Abundance of Deer in Victoria

Determining abundance of deer species in Victoria using camera trap data

Justin G Cally (https://justincally.github.io/blog/), Arthur Rylah Institute (https://www.ari.vic.gov.au/)
2023-07-06

Methods used to estimate deer density

Camera detection

The density of deer at a given site can be estimated using camera-trap distance sampling (CTDS). CTDS is a modified form of distance sampling that allows us to infer the probability that a given individual will be detected within the survey area (the area in front of the camera). This detection probability is a function of the distance of the individual from the camera: individuals entering the camera field of view further from the camera have a lower detection probability, up to a truncation distance beyond which detection probability is near-zero.

An underlying assumption of CTDS is that the probability a deer will be available for detection is equal at any given point within the camera field of view. Under this assumption, for a point transect, we take into account the total area of each distance bin, which increases at further distance bins. However, in this study we implement a novel method that incorporates the group size of the detected species into the availability calculations. For larger groups, CTDS should account for the availability of the closest individual rather than the availability of all individuals: as group size increases, it is more likely that the closest individual in the group is close to the camera. If we assume that the triggering of the camera depends on the closest individual, then we must adjust our estimated availability to account for variable group sizes. If we do not adjust for group size and only use the distance to the closest individual in our distance-sampling models, we will likely underestimate the detection rate, because recorded distances will be smaller than in reality. Alternatively, if we record distances to multiple individuals in the same photo and take an average, or model them independently, we will likely overestimate detection probability, because individuals at further distances are only recorded when a closer individual has triggered the camera trap. For more information on how group size is likely to affect CTDS abundance estimates, see the simulation study here: https://justincally.github.io/blog/posts/2022-11-10-ctds-for-groups/
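To illustrate why the closest-individual distance shifts with group size, the simulation below (our own illustration, not the project's availability code) places groups uniformly at random within the camera's field of view and records the distance to the nearest group member:

```r
# Illustrative simulation (not the project's availability calculation):
# distance to the closest individual in a group placed uniformly at random
# within the camera's field of view (a circular sector of radius max_distance).
set.seed(1)
min_dist <- function(group_size, max_distance = 12, n_sim = 10000) {
  replicate(n_sim, {
    # Uniform placement over area => CDF of distance is (d/max_distance)^2
    d <- max_distance * sqrt(runif(group_size))
    min(d)
  })
}

# Mean distance to the closest individual declines as group size grows
sapply(c(1, 2, 5), function(g) mean(min_dist(g)))
```

Under uniform placement, the closest of \(g\) independent individuals has CDF \(1 - (1 - (d/r)^2)^g\), so larger groups trigger the camera at systematically shorter distances; this is the bias that the group-size availability adjustment corrects.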

In this study we investigate two possible detection functions that may explain how detection rates ‘fall off’ with distance from the camera. The first is a half-normal detection function, given by:

\[p = exp(-y^2/(2\sigma^2)) \] Here \(p\) is the probability of detection, \(y\) is the distance from the camera (midpoint of the detection bin), and \(\sigma\) is the scale parameter. Alternatively, we compare the fit of this detection function to a hazard-rate function, where detection probability is given by:

\[p = 1 - exp(-(y/\sigma)^{-\theta})\] In these hazard-rate models, an additional shape parameter (\(\theta\)) is estimated. Given that detection rates may not be uniform across sites, we model the scale parameter across sites (\(i\)) as:

\[log(\sigma_i) = \alpha + \beta_{det} \times HU_i\] where \(HU_i\) is the amount of herbaceous understorey measured surrounding the camera, \(\beta_{det}\) is the coefficient for the effect of herbaceous understorey on detection, and \(\alpha\) is the intercept.
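The two detection functions can be written directly in R, mirroring the halfnorm() and hazard() functions in the Stan models below (the bin midpoints and coefficient values here are illustrative only):

```r
# Detection probability at distance-bin midpoints, mirroring the Stan functions
halfnorm <- function(sigma, midpoints) exp(-midpoints^2 / (2 * sigma^2))
hazard   <- function(sigma, theta, midpoints) 1 - exp(-(midpoints / sigma)^(-theta))

midpts <- c(1.25, 3.75, 6.25, 8.75, 11.25)  # example 2.5 m bins out to 12.5 m

# Site-level scale parameter on the log link: log(sigma_i) = alpha + beta_det * HU_i
alpha <- 1.5; beta_det <- -0.3; HU <- 0.8   # illustrative coefficients
sigma_i <- exp(alpha + beta_det * HU)

round(halfnorm(sigma_i, midpts), 3)          # declines with distance
round(hazard(sigma_i, theta = 2, midpts), 3) # flatter 'shoulder', sharper fall-off
```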

Using these estimates of detection probability in different bins we can then estimate the average detection probability of an individual at a given camera station. This can then be used to account for imperfect detection when fitting observed counts from the camera trap to parameters estimating the true abundance at a site.
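Averaging over distance bins weights each bin's detection probability by the availability of that bin (the pa term in the Stan data). A minimal single-individual sketch, with illustrative bins and a half-normal detection function:

```r
# Average detection probability at a camera: weight each distance bin's
# detection probability by that bin's availability (illustrative values)
midpts <- c(1.25, 3.75, 6.25, 8.75, 11.25)
p_bin  <- exp(-midpts^2 / (2 * 4^2))    # half-normal detection, sigma = 4

# Point-transect availability: bin area (and hence availability) grows with distance
edges <- c(0, 2.5, 5, 7.5, 10, 12.5)
pa    <- diff(edges^2) / max(edges)^2   # proportional bin areas, sums to 1

p_avg <- sum(p_bin * pa)                # average detection probability
p_avg
```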

Modelling density

Modelling abundance using camera trap data

The density at a site is modelled as a function of the observed counts, the relative frequency of group sizes, the distance-sampling detection probability, the survey effort (area in front of the camera multiplied by the time/snapshot moments the camera is deployed for), and the proportion of time within a 24-hour cycle that deer were active (\(activity\)). The activity parameter was estimated using the activity R package and included in the model using an informative beta-distribution prior. For a given species, the number of detections of groups (one or more deer) during the deployment is estimated with:

\[C_{n,j} \sim NegBinomial2(exp(log(\lambda_{n}) + log(p_{n,j}) + log(activity) + log(\epsilon gs_{n,j}) + log(\bar p_n)) \times survey\ area_n,\ \phi_n)\]
Where \(C_{n,j}\) is the observed count of groups of size \(j\) at site \(n\), and \(\lambda_{n}\) is the true density at the site. In the model below, \(X_n\) are the values of the fixed-effect covariates, \(\beta\) are the fixed-effect coefficients, and \(\epsilon bioregion_n\) is the bioregion random effect (implemented non-centred, as a standard-normal raw term scaled by an estimated standard deviation):

\[log(\lambda_{n}) = X_n \times \beta + \epsilon bioregion_n\]

The \(\epsilon gs_{n,j}\) term is the proportional variation in group size at a site, where \(\sum_{j=1}^{max(group\ size)} \epsilon gs_{n,j} = 1\). Assuming that the distribution of group sizes can vary between sites, we model the variation in group size between sites with a group-size-level intercept (\(\zeta_j\)) and a site-by-group-size random effect (\(\epsilon site_{n,j}\)):

\[\epsilon psi_{nj} = exp(\zeta_j + \epsilon site_{n,j}) \] The proportional group size at a given site and for a given group size is thus given by:

\[\epsilon gs_{nj} = \frac{\epsilon psi_{nj}}{\sum_j \epsilon psi_{nj}}\]
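This is a softmax over the group-size intercepts plus site-level noise; a minimal sketch with illustrative values:

```r
# Softmax construction of site-level group-size proportions (illustrative values)
zeta     <- c(1.2, 0.3, -0.8)    # group-size intercepts (group sizes 1, 2, 3)
eps_site <- c(0.1, -0.2, 0.05)   # site x group-size random effects
epsi     <- exp(zeta + eps_site)
eps_gs   <- epsi / sum(epsi)     # proportions sum to 1

round(eps_gs, 3)
```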

We opt for a negative binomial model to account for overdispersion in counts/abundance. The parameterization of the negative binomial model implemented here (NegBinomial2) contains a mean as expressed by the log-sum of the first five terms. It also contains a dispersion parameter (\(\phi_n\)), which is the overdispersion at the site. \(\phi_n\) is modelled as a function of bioregion:

\[\phi_n = exp(\chi mu_{bioregion_n} + \chi sd_{bioregion_n} \times \chi raw_{bioregion_n})\]
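The NegBinomial2 parameterization (mean \(\mu\), dispersion \(\phi\), variance \(\mu + \mu^2/\phi\)) corresponds to base R's rnbinom() with the mu and size arguments, which gives a quick way to see the overdispersion behaviour (values illustrative):

```r
# NegBinomial2(mu, phi): size plays the role of the dispersion parameter phi
set.seed(42)
mu  <- 4
phi <- 1.5
x   <- rnbinom(1e5, mu = mu, size = phi)

mean(x)               # close to mu
var(x)                # close to mu + mu^2 / phi, i.e. exceeds the mean
mu + mu^2 / phi
```

Smaller \(\phi\) means more overdispersion; as \(\phi \to \infty\) the distribution approaches a Poisson with mean \(\mu\).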

Supplementing the camera trap data with transect-level data

Our surveys involved deploying cameras at a site for usually 6-8 weeks, together with three 150 m transect searches: two transects walked up and back by a single observer, and one transect walked once by two observers. This gives an approximate survey effort at each site of 150 m x 2 x 3 = 900 m of transect walked. Deer sign was recorded on these transects, with pellets, footprints, rubbings, and wallows each recorded as present or absent.

These data provide supplemental presence-absence records that can be integrated into our model to help inform likely densities at sites where cameras did not record observations but one or more deer signs were detected along the transects.

The foundation for this integration is a Royle-Nichols model (Royle and Nichols 2003) that models the frequency of detection as a mixture of detection probability and abundance. Given that we also collect absolute abundance estimates via the camera trap data, relative abundance measures from this output can be transformed to absolute abundance and thus provide an avenue to integrate camera and transect-based data into a single model estimating abundance. The model can be expressed as:

\[N_n \sim NegBinomial2(\lambda_n, \bar\phi)\] Where \(\bar\phi\) is the over-dispersion term of the Royle-Nichols component of the model. Conditional detection probability is expressed based on the probability of detection from a given visit/survey type (\(j\)) (e.g. camera, pellet transect, footprint transect), and the number of individuals available to be detected at the site (\(N_n\)):

\[p_{nj} = 1 - (1 - r_{nj}) ^ {N_n}\] The model for this detection probability is then:

\[y_{nj} \sim Bernoulli(p_{nj})\] This model thus allows for \(\lambda_n\) to be modelled via the camera counts and the results of detections along the transect.

Finally, the Royle-Nichols model also allows us to estimate the rate at which the camera trap failed to detect individuals at a site when they were present. Specifically, we can use the average probability of detecting at least one individual at the camera trap, given that individuals have expected abundance \(\lambda\) (\(\bar p\)).
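The \(\bar p\) calculation marginalizes the conditional detection probability over the (truncated) prior distribution of site abundance. It can be sketched in R, mirroring the Nlik/pcond/pbar loop in the Stan model (all values illustrative):

```r
# Average probability of detecting at least one individual, marginalized over N
# (mirrors the Stan pbar computation; values illustrative)
r          <- 0.3        # per-individual detection probability
log_lambda <- log(2)     # expected abundance on the log scale
phi        <- 1.2        # negative binomial dispersion
n_max      <- 30         # truncation point for the sum over N

N    <- 0:n_max
gN   <- dnbinom(N, mu = exp(log_lambda), size = phi)
gN   <- gN / sum(gN)     # renormalized (truncated) prior over N

pbar <- sum((1 - (1 - r)^N) * gN)
pbar
```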

Setup

Load in the relevant packages for the analysis and connect to the database. Camera trap and site data are stored in the database.

Show code
library(cmdstanr)
library(dplyr)
library(sf)
library(bayesplot)
library(loo)
library(gt)
library(terra)
library(caret)
library(tidyterra)
library(tidyr)
library(VicmapR)
library(kableExtra)
library(brms)
 options(brms.backend = "cmdstanr")
check_cmdstan_toolchain(fix = TRUE, quiet = TRUE)
register_knitr_engine(override = FALSE)
options(mc.cores=8)

#### Database connection ####
con <- weda::weda_connect(password = keyring::key_get(service = "ari-dev-weda-psql-01",
                                                username = "psql_user"), username = "psql_user")

Custom Functions

Additional functions used in the data preparation, modelling and analysis are available in the /functions directory.

Show code
# Source internal functions
sapply(list.files("functions", full.names = T), source, verbose = F)

Prepare Data

Wrangle and format data for the STAN models for the various species.

Scope of models

Outline which species should be modeled, and which projects to source data from.

Show code
# Species to run model for.
deer_species_all <- c("Cervus unicolor", "Dama dama", "Cervus elaphus", "Axis porcinus")
species_names <- c("Sambar Deer", "Fallow Deer", "Red Deer", "Hog Deer")
# Projects to select.
project_short_name <- c("hog_deer_2023", "StatewideDeer")
# Buffer for data extraction.
spatial_buffer <- 1000
# Covariate Rasters
raster_files <- "/Volumes/DeerVic\ Photos/Processed_Rasters"
# raster_files <- "data/prediction_raster"
prediction_raster <- "data/prediction_raster/statewide_raster.tif"
# For the integrated model we place limits on the maximum density of deer to integrate over. In cases where there are no detections on the camera this is limited to 15 per km2. In cases where deer were detected on the camera this is expanded to 30. We believe this is sufficiently high. Very high values will be less efficient. 
n_max_no_det <- 15
n_max_det <- 30

Camera locations

Download the camera locations from the database, this table outlines the locations and the deployment history of the cameras.

Show code
cams_curated <- tbl(con, dbplyr::in_schema("camtrap", "curated_camtrap_operation")) %>%
  dplyr::filter(ProjectShortName %in% !!project_short_name) %>% # Only retrieve cam records for hog deer 2023
  dplyr::collect() %>%
  dplyr::arrange(SiteID) %>%
  sf::st_as_sf(., coords = c("Longitude", "Latitude"), crs = 4283) 

n_site <- nrow(cams_curated)

Formulas for detection and abundance

Here we outline formulas to be used in the models. The formulas account for the various fixed-effect parameters.

Show code
#### Model formulas ####

#### Transect Formula: Survey Only ####
transect_formula <- ~Survey

#### Abundance Formula Options #### 
ab_formula_1 <- ~ scale(sqrt(BareSoil)) + scale(sqrt(Nitrogen)) + scale(log(PastureDistance)) + scale(BIO15) + scale(sqrt(ForestEdge)) + 
  scale(sqrt(MRVBF)) + scale(sqrt(SLOPE)) + scale(sqrt(NonPhotosyntheticVeg))

ab_formula_2 <- ~ scale(sqrt(BareSoil)) + scale(sqrt(Nitrogen)) + scale(log(PastureDistance)) + scale(BIO15) + scale(sqrt(ForestEdge)) + scale(sqrt(MRVBF)) 

ab_formula_3 <- ~ scale(sqrt(BareSoil)) + scale(sqrt(Nitrogen)) + scale(log(PastureDistance)) + scale(BIO15) + scale(sqrt(ForestEdge)) 

ab_formula_4 <- ~ 1

#### Detection Formula: Distance-sampling ####
det_formula <- ~ scale(HerbaceousUnderstoryCover) # average detection across all sites 

Create model data

Using the prepare_model_data() function we generate the data for the STAN model. This function will:
1. Download data from the database
2. Format that data to match the distance-sampling bins
3. Organise the counts into sites and group sizes
4. Generate model matrix of the various submodels (distance-sampling, abundance and transect)
5. Generate data for the prediction process
6. Generate data for the random effect (bioregion)
7. Generate data for regional predictions (indexing based on DEECA regions)

Show code
if(!file.exists("data/multispecies_data.rds")) {
multispecies_data <- prepare_model_data_multispecies(species = deer_species_all,
                                 projects = project_short_name,
                     buffer = spatial_buffer,
                     detection_formula = det_formula,
                     abundance_formula = ab_formula_1,
                     transect_formula = transect_formula,
                     con = con,
                     raster_dir = raster_files,
                     prediction_raster = prediction_raster,
                     n_max_no_det = n_max_no_det,
                     n_max_det = n_max_det,
                     evaltransects = TRUE, 
                     filter_behaviour = TRUE)

multispecies_data$keyfun <- 0 # 0 = HN
multispecies_data$raw_data[is.na(multispecies_data$raw_data)] <- 0
multispecies_data$transects[is.na(multispecies_data$transects)] <- 0

evc_groups <- readRDS("data/evc_groupnames.rds")

multispecies_data <- c(multispecies_data, evc_groups)

saveRDS(multispecies_data, "data/multispecies_data.rds")
} else {
  multispecies_data <- readRDS("data/multispecies_data.rds")
}

Model Execution

MCMC settings

Below we list the MCMC settings for our model. We run models on eight parallel chains for 400 iterations each (200 warmup and 200 sampling). These settings provide us with 1,600 posterior draws (8 x 200).

Show code
# STAN settings
ni <- 200 # sampling iterations
nw <- 200 # warmup iterations 
nc <- 8 # number of chains
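As a sketch, these settings might be passed to cmdstanr as follows (the model path here is a placeholder, not the project's actual file name):

```r
# Sketch: fitting one of the Stan models with the settings above via cmdstanr
# ("models/integrated.stan" is a hypothetical path)
mod <- cmdstan_model("models/integrated.stan")
fit <- mod$sample(
  data = multispecies_data,
  chains = nc, parallel_chains = nc,
  iter_warmup = nw, iter_sampling = ni
)
```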

Model Options

We generate a set of models to evaluate against one another, comparing four fixed-effect formulas for abundance, two detection key functions (half-normal and hazard-rate), and whether an EVC random effect is included.

Show code
#Models to run 
models_to_run <- data.frame(formula = c("ab_formula_1", 
                                        "ab_formula_2",
                                        "ab_formula_3",
                                        "ab_formula_4",  
                                        "ab_formula_3",
                                        "ab_formula_3", 
                                        "ab_formula_4"), 
                            keyfun = c(0,0,0,0,1,0,0), 
                            evc = c(F,F,F,F,F,T,T), 
                            det_form = c("det_formula", 
                                         "det_formula",
                                         "det_formula",
                                         "det_formula",
                                         "det_formula",
                                         "det_formula",
                                         "det_formula"))

Read in Models

These are the models used in the analysis. The first is an integrated model that requires transect and camera data. The second is a count-only model that just requires the camera data.

Show code
functions {

    /* Half-normal function
   * Args:
   *   sigma: sigma
   *   midpoints: midpoints
   * Returns:
   *   detection probability
   */
    vector halfnorm(real sigma, vector midpoints) {
      int bins = rows(midpoints);
      vector[bins] p_raw; // detection probability

      p_raw = exp(-(midpoints)^2/(2*sigma^2));
      return p_raw;
    }

      /* Hazard function
   * Args:
   *   sigma: sigma
   *   theta: theta
   *   midpoints: midpoints
   * Returns:
   *   detection probability
   */
    vector hazard(real sigma, real theta, vector midpoints) {
      int bins = rows(midpoints);
      vector[bins] p_raw; // detection probability

      p_raw = 1 - exp(-(midpoints/sigma)^(-theta));
      return p_raw;
    }

  vector prob_dist(real sigma, real theta, int keyfun, vector midpoints){
  int bins = rows(midpoints);
  vector[bins] out; // detection probability

  if(keyfun == 0){
    out = halfnorm(sigma, midpoints);
  } else if(keyfun == 1){
    out = hazard(sigma, theta, midpoints);
  }
  return out;
  }
}
data {
  int<lower=0> N;                      // number of observations
  int<lower=0> S;                      // number of species
  real delta;                          // bin width
  int<lower=1> n_site;                 // site
  int<lower=1> n_distance_bins;        // distance bins
  int<lower=1> n_gs;                   // number of group sizes
  vector[n_gs] gs;                     // group sizes
  vector[n_distance_bins] midpts;      // midpt of each interval
  real<lower=1> max_distance;         // truncation distance
  int<lower=1> max_int_dist;          // max distance as an integer
  real<lower=0> theta_frac;           // fraction of camera view
  array[n_site] int effort;           // effort
  array[n_site, n_gs, S] int n_obs;      //number of observations
  array[n_site, n_distance_bins, n_gs] int y; // observations matrix

  // summary of whether species is known to be present at each site
  int<lower = 0, upper = 1> any_seen[S, n_site];

  // number of surveys at each site
  int<lower = 0> n_survey[n_site];

  // availability prior information
  real<lower=0> bshape;               // availability shape
  real<lower=0> bscale;               // availability scale

  // camera trap detection parameters
  int<lower=0> det_ncb;                 // number of covariates for detection model
  matrix[n_site, det_ncb] det_model_matrix; // detection model matrix
  array[n_gs, n_distance_bins] real pa ; // averaged availability for multiple individuals

  // Abundance/occupancy model matrix
  int<lower = 1> m_psi;
  matrix[n_site, m_psi] X_psi;
  // negbinom scale
  // real reciprocal_phi_scale;

  //transect level information
  int<lower=1> trans;                  // total number of transects across all sites for all methods
  array[S, trans] int<lower = 0, upper = 1> y2; // transect based binomial detections
  int<lower = 0, upper = trans> start_idx[n_site];
  int<lower = 0, upper = trans> end_idx[n_site];
  int<lower=1> trans_det_ncb;           // number of covariates for transect detection model
  matrix[trans, trans_det_ncb] trans_pred_matrix; // transect detection model matrix
  int<lower=1>  n_max[n_site, S]; // max for poisson RN

  // Prediction data
  int<lower=1> npc;                 // number of prediction grid cells
  matrix[npc, m_psi] X_pred_psi; // pred matrix
  vector[npc] prop_pred; //offset
  // bioregion RE
  int<lower=1> np_bioreg;
  int<lower=1> site_bioreg[n_site];
  int<lower=1> pred_bioreg[npc];
    // region data
  int<lower=1> np_reg;
  int<lower=1> site_reg[n_site];
  int<lower=1> pred_reg[npc];

  // key function, 0 = halfnorm
  int keyfun;
}

transformed data {
  // vector[n_distance_bins] pi; // availability for point
  vector[n_site] survey_area;
  array[S, n_site] real cam_seen;
    for(i in 1:n_site) {
      survey_area[i] = theta_frac * effort[i] * pi() * (max_distance/1000)^2;
      for(s in 1:S) {
      cam_seen[s,i] = sum(n_obs[i,,s]);
      }
    }

}

parameters {
 // abundance parameters
  //array[n_site] simplex[n_gs] eps_ngs; // random variation in group size
  array[S] vector[m_psi] beta_psi;
  vector[det_ncb] beta_det;
  real log_theta;
  // transect detection parameters
  vector[trans_det_ncb] beta_trans_det;
  // temporal availability parameters
  real<lower=0, upper=1> activ;
  // bioregion RE
  array[S] real<lower=0> bioregion_sd;
  array[S] vector[np_bioreg] bioregion_raw;
  // eps group size params
  array[S] vector[n_gs] zeta;
  array[S] matrix[n_site, n_gs] eps_raw;
  array[S] real<lower=0> grp_sd;
  // od
  array[S] real od_mu;
  array[S] real<lower=0> od_sd;
  array[S] vector[np_bioreg] od_raw;
  real odRNmu;
}

transformed parameters {
  // random effects
  array[S] vector[np_bioreg] eps_bioregion; // bioregion random effect
  // distance parameters
  array[n_site] real log_sigma;
  array[n_site] real sigma;
  array[n_site] vector[n_distance_bins] p_raw; // detection probability
  array[n_site, n_distance_bins, n_gs] real p_raw_scale; // detection probability for point independence model and accounting for availability
  array[n_site, n_distance_bins, n_gs] real<upper=0> log_p_raw; // log detection probability accounting for availability
  array[n_site, n_gs] real log_p; // log probability with summed exponential for the multinomial logit model
  array[n_site, n_gs] real<lower=0,upper=1> p; // log probability with summed exponential for the multinomial logit model
  // abundance parameters
  array[S, n_site, n_gs] real<lower=0> lambda;
  // activity parameters
  real log_activ = log(activ);
  vector[trans] logit_trans_p = trans_pred_matrix * beta_trans_det; // observation process model
  // lp_site for RN model
  array[S, n_site] real lp_site;
  vector<lower=0,upper=1>[trans] r = inv_logit(logit_trans_p);
  array[S] vector[n_site] log_lambda_psi;
  // eps group size
  array[S, n_site] simplex[n_gs] eps_ngs; // random variation in group size
  array[S, n_site] vector[n_gs] epsi;
  array[S] matrix[n_site, n_gs] eps_site;
  real<lower=0> theta = exp(log_theta);
  array[S] vector[np_bioreg] od; // bioregion random effect
  real odRN = exp(odRNmu);  // Royle-Nichols overdispersion
  array[S, n_site] real<lower=0, upper=1> pbar;

  for(s in 1:S) {
    for(b in 1:np_bioreg) {
    eps_bioregion[s,b] = bioregion_sd[s] * bioregion_raw[s,b];
    od[s,b] = exp(od_mu[s] + od_sd[s] * od_raw[s,b]);
  }
  }



for(n in 1:n_site) {
  log_sigma[n] = det_model_matrix[n,] * beta_det;
  sigma[n] = exp(log_sigma[n]);
  p_raw[n] = prob_dist(sigma[n], theta, keyfun, midpts);
  for (i in 1:n_distance_bins) {
      // assuming a half-normal detection fn from line
      // p_raw[n,i] = exp(-(midpts[i])^2/(2*sigma[n]^2)); // half-normal function (pg 419 of AHM)
      // hazard function
      // p_raw[n,i] = 1 - exp(-(midpts[i]/theta)^(-sigma[n])); // hazard function (pg 507 of AHM)
    for(j in 1:n_gs) {
      p_raw_scale[n,i,j] = p_raw[n,i]*pa[j,i]; //  pr(animal occurs and is detected in bin i)
      log_p_raw[n,i,j] = log(p_raw_scale[n,i,j]);
      }
  }

  for(s in 1:S) {
  vector[n_max[n,s]+1] Nlik;
  vector[n_max[n,s]+1] gN;
  vector[n_max[n,s]+1] pcond;
  log_lambda_psi[s,n] = X_psi[n,] * beta_psi[s] + eps_bioregion[s,site_bioreg[n]];

  for(j in 1:n_gs) {
    eps_site[s, n,j] = grp_sd[s] * eps_raw[s,n,j];
    epsi[s,n,j] = exp(zeta[s,j] + eps_site[s,n,j]);
  }
  eps_ngs[s,n,] = epsi[s,n,]/sum(epsi[s,n,]);

  // p-bar

  for(k in 1:(n_max[n,s]+1))
    Nlik[k] = exp(neg_binomial_2_log_lpmf(k-1 | log_lambda_psi[s,n], odRN));

  gN = Nlik/sum(Nlik);

  for(k in 1:(n_max[n,s]+1))
    pcond[k] = (1 - (1 - r[start_idx[n]])^(k-1)) * gN[k];

  pbar[s,n] = sum(pcond);

  }

  for(j in 1:n_gs) {
    log_p[n,j] = log_sum_exp(log_p_raw[n,,j]);
    p[n,j] = exp(log_p[n,j]);
    // model site abundance
    for(s in 1:S) {
    lambda[s,n,j] = exp(log_lambda_psi[s,n] + log_p[n,j] + log_activ + log(eps_ngs[s,n,j]) + log(pbar[s,n])) .* survey_area[n];
    }
  }

// Royle-Nichols implementation in STAN (looping over possible discrete values of N)
// https://discourse.mc-stan.org/t/royle-and-nichols/14150
// https://discourse.mc-stan.org/t/identifiability-across-levels-in-occupancy-model/5340/2
    for(s in 1:S) {
if (n_survey[n] > 0) {
  vector[n_max[n,s]] lp;
    if(any_seen[s,n] == 0){ // not seen
      lp[1] = log_sum_exp(neg_binomial_2_log_lpmf(0 | log_lambda_psi[s,n], odRN), neg_binomial_2_log_lpmf(1 | log_lambda_psi[s,n], odRN) + bernoulli_lpmf(y2[s,start_idx[n]:end_idx[n]] | r[start_idx[n]:end_idx[n]]));
    } else {
      lp[1] = neg_binomial_2_log_lpmf(1 | log_lambda_psi[s,n], odRN) + bernoulli_lpmf(y2[s,start_idx[n]:end_idx[n]] | r[start_idx[n]:end_idx[n]]);
    }
    for (k in 2:n_max[n,s]){
      lp[k] = neg_binomial_2_log_lpmf(k | log_lambda_psi[s,n], odRN) + bernoulli_lpmf(y2[s,start_idx[n]:end_idx[n]] | 1-(1-r[start_idx[n]:end_idx[n]])^k);
    }
    lp_site[s,n] = log_sum_exp(lp);
    } else {
    lp_site[s, n] = 0;
    }
  }
    }
}

model {

  for(s in 1:S) {
    beta_psi[s] ~ normal(0, 3); // prior for intercept in poisson model
    bioregion_sd[s] ~ normal(0,2);
    bioregion_raw[s,] ~ normal(0,1);
    to_vector(eps_raw[s,,]) ~ std_normal();
    grp_sd[s] ~ normal(0, 1);
    zeta[s,] ~ normal(0, 2);
    // od
    od_sd[s] ~ normal(0,1);
    od_mu[s] ~ normal(0,1);
    od_raw[s,] ~ normal(0,1);
  }
  odRNmu ~ normal(0,1);
  beta_trans_det ~ normal(0, 2); // beta for transect detection
  beta_det ~ normal(0, 4); // prior for sigma
  activ ~ beta(bshape, bscale);  //informative prior
  log_theta ~ normal(0,2); // prior for theta

  for(n in 1:n_site) {
  for(j in 1:n_gs) {
  for(s in 1:S) {
  target += neg_binomial_2_lpmf(n_obs[n,j,s] | lambda[s,n,j], od[s, site_bioreg[n]]);
        }
  y[n,,j] ~ multinomial_logit(to_vector(log_p_raw[n,,j]));
  }
  for(s in 1:S) {
  target += lp_site[s, n];
  }
}
}

generated quantities {
  array[S, n_site,n_gs] real n_obs_pred;
  array[S, n_site, n_gs] real n_obs_true;
  array[S, n_site] real N_site;
  array[S, n_site] real N_site_pred;
  array[n_site, max_int_dist+1] real DetCurve;
  array[n_site, n_gs] real log_lik1;
  array[S, n_site, n_gs] real log_lik2;
  array[n_site, n_gs] real log_lik2_site;
  array[n_site] real log_lik;
  //array[S, n_site] real Site_lambda;
  real av_gs[S];
  array[S] simplex[n_gs] eps_gs_ave;
  array[S, npc] real pred;
  //array[S, npc] real pred_trunc;
  array[S, np_reg] real Nhat_reg;
  array[S, np_bioreg] real Nhat_bioreg;
  // array[np_reg] real Nhat_reg_design;
  real Nhat[S];
  //real Nhat_trunc[S];
  real Nhat_sum;
  int trunc_counter[S];
  for(s in 1:S) trunc_counter[s] = 0;

for(s in 1:S) {
  eps_gs_ave[s] = exp(zeta[s])/sum(exp(zeta[s]));
  av_gs[s] = sum(gs .* eps_gs_ave[s]); //average group size
}


for(n in 1:n_site) {
  for(j in 1:n_gs) {
  log_lik1[n,j] = multinomial_logit_lpmf(y[n,,j] | to_vector(log_p_raw[n,,j])); //for loo
  for(s in 1:S) {
  log_lik2[s,n,j] = neg_binomial_2_lpmf(n_obs[n,j,s] | lambda[s,n,j], od[s, site_bioreg[n]]); //for loo
  n_obs_true[s, n, j] = gs[j] * (neg_binomial_2_log_rng(log_lambda_psi[s,n] + log(eps_ngs[s,n,j]), od[s, site_bioreg[n]]));
  n_obs_pred[s, n,j] = gs[j] * neg_binomial_2_rng(lambda[s,n,j], od[s, site_bioreg[n]]);
  }
  log_lik2_site[n, j] = log_sum_exp(log_lik2[,n,j]);
    }
    // get loglik on a site level
    log_lik[n] = log_sum_exp(log_sum_exp(log_sum_exp(log_lik1[n,]),
    log_sum_exp(log_lik2_site[n,])), log_sum_exp(lp_site[,n]));
      for(s in 1:S) {
    //Site_lambda[s,n] = exp(log_lambda_psi[s,n]);
    N_site[s,n] = sum(n_obs_true[s,n,]);
    N_site_pred[s,n] = sum(n_obs_pred[s,n,]);
      }

  // loop over distance bins
  if(keyfun == 0) {
  for(j in 0:max_int_dist) { // get DS predictions for distance 0 to max bin distance
    DetCurve[n, j+1] = exp(-(j+0.5)^2/(2*sigma[n]^2)); // half normal
    }
  } else if(keyfun == 1) {
    for(j in 0:max_int_dist) { // get DS predictions for distance 0 to max bin distance
    DetCurve[n, j+1] = 1 - exp(-((j+0.5)/sigma[n])^(-theta)); //hazard rate
    }
  }
}

for(s in 1:S) {
for(i in 1:np_reg) Nhat_reg[s,i] = 0;
for(i in 1:np_bioreg) Nhat_bioreg[s,i] = 0;

for(i in 1:npc) {
  pred[s,i] = neg_binomial_2_log_rng(X_pred_psi[i,] * beta_psi[s] + eps_bioregion[s, pred_bioreg[i]], od[s, pred_bioreg[i]]) * prop_pred[i] * av_gs[s]; //offset
  if(pred[s,i] > max(N_site[s,])) {
    //pred_trunc[s,i] = max(N_site[s,]);
    trunc_counter[s] += 1;
  }
  Nhat_reg[s,pred_reg[i]] += pred[s,i];
  Nhat_bioreg[s,pred_bioreg[i]] += pred[s,i];
}
Nhat[s] = sum(pred[s,]);
//Nhat_trunc[s] = sum(pred_trunc[s,]);
}
Nhat_sum = sum(Nhat);
}
Show code
functions {

    /* Half-normal function
   * Args:
   *   sigma: sigma
   *   midpoints: midpoints
   * Returns:
   *   detection probability
   */
    vector halfnorm(real sigma, vector midpoints) {
      int bins = rows(midpoints);
      vector[bins] p_raw; // detection probability

      p_raw = exp(-(midpoints)^2/(2*sigma^2));
      return p_raw;
    }

      /* Hazard function
   * Args:
   *   sigma: sigma
   *   theta: theta
   *   midpoints: midpoints
   * Returns:
   *   detection probability
   */
    vector hazard(real sigma, real theta, vector midpoints) {
      int bins = rows(midpoints);
      vector[bins] p_raw; // detection probability

      p_raw = 1 - exp(-(midpoints/sigma)^(-theta));
      return p_raw;
    }

  vector prob_dist(real sigma, real theta, int keyfun, vector midpoints){
  int bins = rows(midpoints);
  vector[bins] out; // detection probability

  if(keyfun == 0){
    out = halfnorm(sigma, midpoints);
  } else if(keyfun == 1){
    out = hazard(sigma, theta, midpoints);
  }
  return out;
  }
}
data {
  int<lower=0> N;                      // number of observations
  int<lower=0> S;                      // number of species
  real delta;                          // bin width
  int<lower=1> n_site;                 // site
  int<lower=1> n_distance_bins;        // distance bins
  int<lower=1> n_gs;                   // number of group sizes
  vector[n_gs] gs;                     // group sizes
  vector[n_distance_bins] midpts;      // midpt of each interval
  real<lower=1> max_distance;         // truncation distance
  int<lower=1> max_int_dist;          // max distance as an integer
  real<lower=0> theta_frac;           // fraction of camera view
  array[n_site] int effort;           // effort
  array[n_site, n_gs, S] int n_obs;      //number of observations
  array[n_site, n_distance_bins, n_gs] int y; // observations matrix

  // summary of whether species is known to be present at each site
  int<lower = 0, upper = 1> any_seen[S, n_site];

  // number of surveys at each site
  int<lower = 0> n_survey[n_site];

  // availability prior information
  real<lower=0> bshape;               // availability shape
  real<lower=0> bscale;               // availability scale

  // camera trap detection parameters
  int<lower=0> det_ncb;                 // number of covariates for detection model
  matrix[n_site, det_ncb] det_model_matrix; // detection model matrix
  array[n_gs, n_distance_bins] real pa ; // averaged availability for multiple individuals

  // Abundance/occupancy model matrix
  int<lower = 1> m_psi;
  matrix[n_site, m_psi] X_psi;
  // negbinom scale
  // real reciprocal_phi_scale;

  //transect level information
  int<lower=1> trans;                  // total number of transects across all sites for all methods
  array[S, trans] int<lower = 0, upper = 1> y2; // transect based binomial detections
  int<lower = 0, upper = trans> start_idx[n_site];
  int<lower = 0, upper = trans> end_idx[n_site];
  int<lower=1> trans_det_ncb;           // number of covariates for transect detection model
  matrix[trans, trans_det_ncb] trans_pred_matrix; // transect detection model matrix
  int<lower=1>  n_max[n_site, S]; // max for poisson RN

  // Prediction data
  int<lower=1> npc;                 // number of prediction grid cells
  matrix[npc, m_psi] X_pred_psi; // pred matrix
  vector[npc] prop_pred; //offset
  // bioregion RE
  int<lower=1> np_bioreg;
  int<lower=1> site_bioreg[n_site];
  int<lower=1> pred_bioreg[npc];
    // region data
  int<lower=1> np_reg;
  int<lower=1> site_reg[n_site];
  int<lower=1> pred_reg[npc];
  // evc data
  int<lower=1> np_evc;
  int<lower=1> site_evc[n_site];
  int<lower=1> pred_evc[npc];


  // key function, 0 = halfnorm
  int keyfun;
}

transformed data {
  // vector[n_distance_bins] pi; // availability for point
  vector[n_site] survey_area;
  array[S, n_site] real cam_seen;
    for(i in 1:n_site) {
      survey_area[i] = theta_frac * effort[i] * pi() * (max_distance/1000)^2;
      for(s in 1:S) {
      cam_seen[s,i] = sum(n_obs[i,,s]);
      }
    }

}

parameters {
 // abundance parameters
  //array[n_site] simplex[n_gs] eps_ngs; // random variation in group size
  array[S] vector[m_psi] beta_psi;
  vector[det_ncb] beta_det;
  real log_theta;
  // transect detection parameters
  vector[trans_det_ncb] beta_trans_det;
  // temporal availability parameters
  real<lower=0, upper=1> activ;
  // bioregion RE
  array[S] real<lower=0> bioregion_sd;
  array[S] vector[np_bioreg] bioregion_raw;
  // evc RE
  real<lower=0> evc_sd;
  array[S] vector[np_evc] evc_raw;
  // eps group size params
  array[S] vector[n_gs] zeta;
  array[S] matrix[n_site, n_gs] eps_raw;
  array[S] real<lower=0> grp_sd;
  // od
  array[S] real od_mu;
  array[S] real<lower=0> od_sd;
  array[S] vector[np_bioreg] od_raw;
  real odRNmu;
}

transformed parameters {
  // random effects
  array[S] vector[np_bioreg] eps_bioregion; // bioregion random effect
  array[S] vector[np_evc] eps_evc; // evc random effect
  // distance parameters
  array[n_site] real log_sigma;
  array[n_site] real sigma;
  array[n_site] vector[n_distance_bins] p_raw; // detection probability
  array[n_site, n_distance_bins, n_gs] real p_raw_scale; // detection probability for point independence model and accounting for availability
  array[n_site, n_distance_bins, n_gs] real<upper=0> log_p_raw; // log detection probability accounting for availability
  array[n_site, n_gs] real log_p; // log detection probability summed over distance bins (log-sum-exp)
  array[n_site, n_gs] real<lower=0,upper=1> p; // detection probability
  // abundance parameters
  array[S, n_site, n_gs] real<lower=0> lambda;
  // activity parameters
  real log_activ = log(activ);
  vector[trans] logit_trans_p = trans_pred_matrix * beta_trans_det; // observation process model
  // lp_site for RN model
  array[S, n_site] real lp_site;
  vector<lower=0,upper=1>[trans] r = inv_logit(logit_trans_p);
  array[S] vector[n_site] log_lambda_psi;
  // eps group size
  array[S, n_site] simplex[n_gs] eps_ngs; // random variation in group size
  array[S, n_site] vector[n_gs] epsi;
  array[S] matrix[n_site, n_gs] eps_site;
  real<lower=0> theta = exp(log_theta);
  array[S] vector[np_bioreg] od; // bioregion-level overdispersion
  real odRN = exp(odRNmu);  // overdispersion for the Royle-Nichols model
  array[S, n_site] real<lower=0, upper=1> pbar;

  for(s in 1:S) {
    for(b in 1:np_bioreg) {
    eps_bioregion[s,b] = bioregion_sd[s] * bioregion_raw[s,b];
    od[s,b] = exp(od_mu[s] + od_sd[s] * od_raw[s,b]);
  }
  for(k in 1:np_evc) {
  eps_evc[s,k] = evc_sd * evc_raw[s,k];
}
  }

for(n in 1:n_site) {
  log_sigma[n] = det_model_matrix[n,] * beta_det;
  sigma[n] = exp(log_sigma[n]);
  p_raw[n] = prob_dist(sigma[n], theta, keyfun, midpts);
  for (i in 1:n_distance_bins) {
      // assuming a half-normal detection fn from line
      // p_raw[n,i] = exp(-(midpts[i])^2/(2*sigma[n]^2)); // half-normal function (pg 419 of AHM)
      // hazard function
      // p_raw[n,i] = 1 - exp(-(midpts[i]/theta)^(-sigma[n])); // hazard function (pg 507 of AHM)
    for(j in 1:n_gs) {
      p_raw_scale[n,i,j] = p_raw[n,i]*pa[j,i]; //  pr(animal occurs and is detected in bin i)
      log_p_raw[n,i,j] = log(p_raw_scale[n,i,j]);
      }
  }

  for(s in 1:S) {
  vector[n_max[n,s]+1] Nlik;
  vector[n_max[n,s]+1] gN;
  vector[n_max[n,s]+1] pcond;
  log_lambda_psi[s,n] = X_psi[n,] * beta_psi[s] + eps_bioregion[s,site_bioreg[n]] + eps_evc[s,site_evc[n]];

  for(j in 1:n_gs) {
    eps_site[s, n,j] = grp_sd[s] * eps_raw[s,n,j];
    epsi[s,n,j] = exp(zeta[s,j] + eps_site[s,n,j]);
  }

  eps_ngs[s,n,] = epsi[s,n,]/sum(epsi[s,n,]);

  // p-bar
  for(k in 1:(n_max[n,s]+1))
    Nlik[k] = exp(neg_binomial_2_log_lpmf(k-1 | log_lambda_psi[s,n], odRN));

  gN = Nlik/sum(Nlik);

  for(k in 1:(n_max[n,s]+1))
    pcond[k] = (1 - (1 - r[start_idx[n]])^(k-1)) * gN[k];

  pbar[s,n] = sum(pcond);

  }

  for(j in 1:n_gs) {
    log_p[n,j] = log_sum_exp(log_p_raw[n,,j]);
    p[n,j] = exp(log_p[n,j]);
    // model site abundance
    for(s in 1:S) {
    lambda[s,n,j] = exp(log_lambda_psi[s,n] + log_p[n,j] + log_activ + log(eps_ngs[s,n,j]) + log(pbar[s,n])) * survey_area[n];
    }
  }

// Royle-Nichols implementation in STAN (looping over possible discrete values of N)
// https://discourse.mc-stan.org/t/royle-and-nichols/14150
// https://discourse.mc-stan.org/t/identifiability-across-levels-in-occupancy-model/5340/2
    for(s in 1:S) {
if (n_survey[n] > 0) {
  vector[n_max[n,s]] lp;
    if(any_seen[s,n] == 0){ // not seen
      lp[1] = log_sum_exp(neg_binomial_2_log_lpmf(0 | log_lambda_psi[s,n], odRN), neg_binomial_2_log_lpmf(1 | log_lambda_psi[s,n], odRN) + bernoulli_lpmf(y2[s,start_idx[n]:end_idx[n]] | r[start_idx[n]:end_idx[n]]));
    } else {
      lp[1] = neg_binomial_2_log_lpmf(1 | log_lambda_psi[s,n], odRN) + bernoulli_lpmf(y2[s,start_idx[n]:end_idx[n]] | r[start_idx[n]:end_idx[n]]);
    }
    for (k in 2:n_max[n,s]){
      lp[k] = neg_binomial_2_log_lpmf(k | log_lambda_psi[s,n], odRN) + bernoulli_lpmf(y2[s,start_idx[n]:end_idx[n]] | 1-(1-r[start_idx[n]:end_idx[n]])^k);
    }
    lp_site[s,n] = log_sum_exp(lp);
    } else {
    lp_site[s, n] = 0;
    }
  }
    }
}

model {

  for(s in 1:S) {
    beta_psi[s] ~ normal(0, 3); // prior for abundance (log-scale) coefficients
    bioregion_sd[s] ~ normal(0,2);
    bioregion_raw[s,] ~ normal(0,1);
    to_vector(eps_raw[s,,]) ~ std_normal();
    grp_sd[s] ~ normal(0, 1);
    zeta[s,] ~ normal(0, 2);
    // od
    od_sd[s] ~ normal(0,1);
    od_mu[s] ~ normal(0,1);
    od_raw[s,] ~ normal(0,1);
    // evc re
    evc_raw[s,] ~ normal(0,1);
  }
  evc_sd ~ normal(0,1);
  odRNmu ~ normal(0,1);
  beta_trans_det ~ normal(0, 2); // beta for transect detection
  beta_det ~ normal(0, 4); // prior for sigma
  activ ~ beta(bshape, bscale);  //informative prior
  log_theta ~ normal(0,2); // prior for theta

  for(n in 1:n_site) {
  for(j in 1:n_gs) {
  for(s in 1:S) {
  target += neg_binomial_2_lpmf(n_obs[n,j,s] | lambda[s,n,j], od[s, site_bioreg[n]]);
        }
  y[n,,j] ~ multinomial_logit(to_vector(log_p_raw[n,,j]));
  }
  for(s in 1:S) {
  target += lp_site[s, n];
  }
}
}

generated quantities {
  array[S, n_site,n_gs] real n_obs_pred;
  array[S, n_site, n_gs] real n_obs_true;
  array[S, n_site] real N_site;
  array[S, n_site] real N_site_pred;
  array[n_site, max_int_dist+1] real DetCurve;
  array[n_site, n_gs] real log_lik1;
  array[S, n_site, n_gs] real log_lik2;
  array[n_site, n_gs] real log_lik2_site;
  array[n_site] real log_lik;
  //array[S, n_site] real Site_lambda;
  real av_gs[S];
  array[S] simplex[n_gs] eps_gs_ave;
  array[S, npc] real pred;
  //array[S, npc] real pred_trunc;
  array[S, np_reg] real Nhat_reg;
  // array[np_reg] real Nhat_reg_design;
  real Nhat[S];
  //real Nhat_trunc[S];
  real Nhat_sum;
  int trunc_counter[S];
  for(s in 1:S) trunc_counter[s] = 0;

for(s in 1:S) {
  eps_gs_ave[s] = exp(zeta[s])/sum(exp(zeta[s]));
  av_gs[s] = sum(gs .* eps_gs_ave[s]); //average group size
}


for(n in 1:n_site) {
  for(j in 1:n_gs) {
  log_lik1[n,j] = multinomial_logit_lpmf(y[n,,j] | to_vector(log_p_raw[n,,j])); //for loo
  for(s in 1:S) {
  log_lik2[s,n,j] = neg_binomial_2_lpmf(n_obs[n,j,s] | lambda[s,n,j], od[s, site_bioreg[n]]); //for loo
  n_obs_true[s, n, j] = gs[j] * (neg_binomial_2_log_rng(log_lambda_psi[s,n] + log(eps_ngs[s,n,j]), od[s, site_bioreg[n]]));
  n_obs_pred[s, n,j] = gs[j] * neg_binomial_2_rng(lambda[s,n,j], od[s, site_bioreg[n]]);
  }
  log_lik2_site[n, j] = log_sum_exp(log_lik2[,n,j]);
    }
    // get loglik on a site level
    log_lik[n] = log_sum_exp(log_sum_exp(log_sum_exp(log_lik1[n,]),
    log_sum_exp(log_lik2_site[n,])), log_sum_exp(lp_site[,n]));
      for(s in 1:S) {
    //Site_lambda[s,n] = exp(log_lambda_psi[s,n]);
    N_site[s,n] = sum(n_obs_true[s,n,]);
    N_site_pred[s,n] = sum(n_obs_pred[s,n,]);
      }

  // loop over distance bins
  if(keyfun == 0) {
  for(j in 0:max_int_dist) { // get DS predictions for distance 0 to max bin distance
    DetCurve[n, j+1] = exp(-(j+0.5)^2/(2*sigma[n]^2)); // half normal
    }
  } else if(keyfun == 1) {
    for(j in 0:max_int_dist) { // get DS predictions for distance 0 to max bin distance
    DetCurve[n, j+1] = 1 - exp(-((j+0.5)/sigma[n])^(-theta)); //hazard rate
    }
  }
}

for(s in 1:S) {
for(i in 1:np_reg) Nhat_reg[s,i] = 0;

for(i in 1:npc) {
  pred[s,i] = neg_binomial_2_log_rng(X_pred_psi[i,] * beta_psi[s] + eps_bioregion[s, pred_bioreg[i]] + eps_evc[s, pred_evc[i]], od[s, pred_bioreg[i]]) * prop_pred[i] * av_gs[s]; //offset
  if(pred[s,i] > max(N_site[s,])) {
    //pred_trunc[s,i] = max(N_site[s,]);
    trunc_counter[s] += 1;
  }
  Nhat_reg[s,pred_reg[i]] += pred[s,i];
}
Nhat[s] = sum(pred[s,]);
//Nhat_trunc[s] = sum(pred_trunc[s,]);
}
Nhat_sum = sum(Nhat);
}
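The `pbar` calculation in the transformed parameters block (the probability that at least one of N individuals is detected on a transect, averaged over a truncated negative binomial prior on N) can be sketched outside Stan. This is a minimal numeric illustration; the values for `lam` (expected abundance), `phi` (overdispersion) and `r` (per-transect detection probability) are made up, not fitted estimates.

```python
import math

def nb2_pmf(k, mu, phi):
    # Negative binomial in the mean/dispersion parameterisation used by
    # Stan's neg_binomial_2: Var = mu + mu^2/phi
    return (math.gamma(k + phi) / (math.gamma(phi) * math.factorial(k))
            * (mu / (mu + phi)) ** k * (phi / (mu + phi)) ** phi)

def pbar(lam, phi, r, n_max):
    # Truncated prior over N = 0..n_max, renormalised (gN in the Stan code)
    nlik = [nb2_pmf(k, lam, phi) for k in range(n_max + 1)]
    total = sum(nlik)
    gN = [x / total for x in nlik]
    # P(at least one of N individuals detected), averaged over N
    return sum((1 - (1 - r) ** k) * gN[k] for k in range(n_max + 1))

p = pbar(lam=2.0, phi=1.5, r=0.3, n_max=50)
print(round(p, 3))
```

As expected, `pbar` increases with both the per-individual detection probability `r` and the expected abundance `lam`, which is why ignoring this term would bias site-level abundance.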
Show code
model_negbin_ms <- cmdstan_model(here::here("stan", "count_det_nondet_negbin_ms.stan"))
model_negbin_ms_evc <- cmdstan_model(here::here("stan", "count_det_nondet_negbin_ms_evc.stan"))

Fit models

We can fit the models using cmdstanr. Counts are modelled with a negative binomial distribution; the candidate models differ in their fixed effects, random effects and detection functions.

Show code
model_fits <- list()

for(i in 1:nrow(models_to_run)) {

  if(models_to_run$evc[i]) {
    model_to_fit <- model_negbin_ms_evc
  } else {
    model_to_fit <- model_negbin_ms
  }
  
  form_to_use <- get(models_to_run$formula[i])
  
  params_to_use <- c(TRUE, labels(terms(ab_formula_1)) %in% labels(terms(form_to_use)))
  
  data_to_use <- multispecies_data
  
  data_to_use$X_psi <- as.matrix(data_to_use$X_psi[,params_to_use])
  data_to_use$X_pred_psi <- as.matrix(data_to_use$X_pred_psi[,params_to_use])
  data_to_use$m_psi <- sum(params_to_use)
  
  data_to_use$keyfun <- models_to_run$keyfun[i]

  model_fits[[i]] <- model_to_fit$sample(data = data_to_use,
                                         chains = nc,
                                 parallel_chains = nc, 
                                 init = 0.1, 
                                 max_treedepth = 10, 
                                 refresh = 25, 
                                 show_messages = FALSE,
                                 save_warmup = FALSE,
                                 iter_sampling = ni, 
                                 iter_warmup = nw)

  model_fits[[i]]$save_object(paste0("outputs/models/fit_multispecies_", 
                                     i,".rds"))

  
}
Show code
prefix <- "outputs/models/fit_multispecies_"
model_fits <- list()
for(i in 1:nrow(models_to_run)) {
  file_to_read <- paste0(prefix, i, ".rds")
  
  model_fits[[i]] <- readRDS(file_to_read)
}

Model Evaluation

Our strategy for model evaluation is to determine the best-performing model from a limited set of candidates with different fixed effects, random effects and detection functions.

LOO (Leave-One-Out Cross-Validation).

We use leave-one-out cross-validation (LOO-CV), a Bayesian model-evaluation approach (Vehtari, Gelman, and Gabry 2017; Vehtari et al. 2020), to compare the models. Better-performing models according to loo::loo() have higher elpd values.
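The `elpd_diff` and `se_diff` columns reported by `loo::loo_compare()` come from pointwise differences in expected log predictive density. A rough sketch of that calculation, using hypothetical pointwise elpd values and ignoring the Pareto-smoothed importance sampling that `loo` performs, is:

```python
import math

def compare_elpd(elpd_a, elpd_b):
    # Pointwise elpd difference between two models, its sum (elpd_diff)
    # and its standard error (sd of the differences scaled by sqrt(n))
    diffs = [a - b for a, b in zip(elpd_a, elpd_b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return sum(diffs), math.sqrt(n * var_d)

# hypothetical pointwise elpd values for two models over 5 observations
m1 = [-1.2, -0.8, -2.1, -0.5, -1.0]
m2 = [-1.3, -0.9, -2.0, -0.6, -1.2]
print(compare_elpd(m1, m2))
```

A positive `elpd_diff` that is large relative to `se_diff` indicates a meaningful preference for the first model; in our table the differences are all small relative to their standard errors.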

Show code
loos <- list()
for(i in 1:length(model_fits)) {
  loos[[i]] <- model_fits[[i]]$loo()
}

names(loos) <- paste("model", 1:length(model_fits))

loo_compare_table <- loo::loo_compare(x = loos)

# Plot loo table
  gt(loo_compare_table %>% 
       as.data.frame() %>% 
              tibble::rownames_to_column()) %>% 
    fmt_number(decimals = 2) %>%
    tab_style(locations = cells_row_groups(), style = cell_text(weight = "bold"))
elpd_diff se_diff elpd_loo se_elpd_loo p_loo se_p_loo looic se_looic
model 6 0.00 0.00 1,032.05 1.64 0.04 0.00 −2,064.10 3.28
model 7 −0.31 0.15 1,031.75 1.60 0.04 0.00 −2,063.49 3.21
model 1 −0.47 0.15 1,031.58 1.64 0.04 0.00 −2,063.16 3.28
model 2 −0.58 0.15 1,031.47 1.64 0.03 0.00 −2,062.95 3.28
model 3 −0.62 0.15 1,031.43 1.64 0.03 0.00 −2,062.86 3.29
model 5 −0.71 0.15 1,031.34 1.64 0.03 0.00 −2,062.69 3.28
model 4 −1.06 0.21 1,031.00 1.61 0.03 0.00 −2,061.99 3.22

Selecting the ‘top’ model

Based on the findings of loo and comparisons of other checks (posterior predictive checks, variation in posterior draws, plausibility of predictions), we see that all models perform similarly well under loo. However, several models produce posterior predictions with less variation. Based on this, we use model 3 hereafter for analysis. This model has fixed effects of scale(sqrt(BareSoil)), scale(sqrt(Nitrogen)), scale(log(PastureDistance)), scale(BIO15) and scale(sqrt(ForestEdge)). It also has a random effect of bioregion for each species and a bioregion-based overdispersion parameter for the camera counts. This model does not have as many fixed effects as models 1 and 2, and does not have a second random effect of evc alongside bioregion (model 6). Because the additional evc random effect produces much more variable and overdispersed predictions for Red Deer, we opt for model 3 instead.

Models with half-normal or hazard-rate detection functions perform similarly according to loo. We opt for the half-normal, as the fitted detection function appears more sensible for a large mammal like deer. There is some variation between the two: up to 12.5 m, the half-normal gives a somewhat higher average detection probability for group sizes 1-5 (HN: 0.61, HZ: 0.44).
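The difference between the two detection functions can be sketched by averaging each curve over the survey area in front of the camera, weighting by distance because a point transect covers more area at larger radii. The `sigma` and shape values below are made up for illustration, not the fitted estimates.

```python
import math

def avg_p(g, w, n=10000):
    # Area-weighted average detection probability out to truncation
    # distance w (weight proportional to distance, i.e. annulus area)
    num = den = 0.0
    for i in range(n):
        x = (i + 0.5) * w / n
        num += g(x) * x
        den += x
    return num / den

halfnorm = lambda x, sigma=6.0: math.exp(-x**2 / (2 * sigma**2))
hazard = lambda x, sigma=4.0, b=3.0: 1 - math.exp(-(x / sigma) ** (-b))

print(round(avg_p(lambda x: halfnorm(x), 12.5), 2))
print(round(avg_p(lambda x: hazard(x), 12.5), 2))
```

The hazard-rate function has a flat shoulder then drops off sharply, whereas the half-normal declines smoothly, so the two can give different average detection probabilities even when both fit the binned distances similarly.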

Show code
# assign index for top model
top <- 3

Posterior predictive checks

Posterior predictive checks allow us to compare the observed data to the model-generated data. For each species we undertake posterior predictive checks for summary statistics relating to the number of deer seen on the cameras at each site. Ideally a well-fit model is able to make predictions that match the observed data. Here counts are the number of snapshot moments with deer. The summary statistics we use for the posterior predictive checks are:

  1. Maximum counts of deer seen at a site on a camera
  2. Mean number of deer seen at a site on a camera
  3. Standard deviation of the counts of deer on the cameras
  4. Average (mean) counts at sites in a scatter plot
  5. Proportion of sites with zero counts (camera or transect)
  6. Observed vs expected proportion of counts

Sambar Deer

Based on the plots below, we see that the posterior predictive checks for Sambar Deer perform sufficiently well.

Show code
q95 <- function(x) quantile(x, 0.95, na.rm = T)
# q25 <- function(x) quantile(x, 0.25, na.rm = T)
q75 <- function(x) quantile(x, 0.75, na.rm = T)
sd_90 <- function (x, na.rm = FALSE) {
  quants <- quantile(x, c(0.05, 0.95), na.rm = T)
  x <- x[x < quants[2] & x > quants[1]]
  sqrt(var(if (is.vector(x) || is.factor(x)) x else as.double(x),
    na.rm = na.rm))
}


funs <- c(max, mean, sd, ppc_scatter_avg, prop_zero, ppc_dens_overlay)
titles <- c("Max", "Mean", "SD", "Average Site Counts", "Proportion Zeros", "Density of Counts")

ppc_sambar <- list()

# funs <- c(prop_zero, mean, q90, sd)

for(i in 1:length(funs)) {
ppc_sambar[[i]] <- posterior_checks_multispecies(model = model_fits[[top]], 
                 model_data = multispecies_data, species_index = 1,
                 stat = funs[[i]], integrated = T,
                 title = titles[i])
}

cowplot::plot_grid(plotlist = ppc_sambar, labels = "AUTO", ncol = 2)
Posterior predictive checks for Sambar Deer

Figure 1: Posterior predictive checks for Sambar Deer

Fallow Deer

The ppcs below show that Fallow Deer have more dispersion in camera counts than Sambar Deer. While the observed counts fall within the intervals of our model, there is higher uncertainty. Some of these findings appear to be driven by a site with very high counts (over 2,000 snapshot moments of deer individuals), which is more than twice the second-highest count.

Show code
ppc_fallow <- list()

for(i in 1:length(funs)) {
ppc_fallow[[i]] <- posterior_checks_multispecies(model = model_fits[[top]], 
                 model_data = multispecies_data, species_index = 2,
                 stat = funs[[i]], integrated = T,
                 title = titles[i])
}

cowplot::plot_grid(plotlist = ppc_fallow, labels = "AUTO", ncol = 2)
Posterior predictive checks for Fallow Deer

Figure 2: Posterior predictive checks for Fallow Deer

Red Deer

Red Deer have ppcs that show a high congruence between predicted and expected counts.

Show code
ppc_red <- list()

for(i in 1:length(funs)) {
ppc_red[[i]] <- posterior_checks_multispecies(model = model_fits[[top]], 
                 model_data = multispecies_data, species_index = 3,
                 stat = funs[[i]], integrated = T, 
                 title = titles[i])
}

cowplot::plot_grid(plotlist = ppc_red, labels = "AUTO", ncol = 2)
Posterior predictive checks for Red Deer

Figure 3: Posterior predictive checks for Red Deer

Hog Deer

Hog Deer have ppcs that show our model performs sufficiently well in explaining the patterns of observed counts.

Show code
ppc_hog <- list()

for(i in 1:length(funs)) {
ppc_hog[[i]] <- posterior_checks_multispecies(model = model_fits[[top]], 
                 model_data = multispecies_data, species_index = 4,
                 stat = funs[[i]], integrated = T,
                 title = titles[i])
}

cowplot::plot_grid(plotlist = ppc_hog, labels = "AUTO", ncol = 2)
Posterior predictive checks for Hog Deer

Figure 4: Posterior predictive checks for Hog Deer

Model Convergence

We observed no divergences or other Stan sampling warnings for our models. To visualise convergence we can inspect the mixing of chains for key parameters below:

Show code
convergence_params <- model_fits[[top]]$draws(c("beta_det", 
                                                "beta_psi",
                                                "bioregion_sd", 
                                                "beta_trans_det", 
                                                "odRN", 
                                                "od_mu", 
                                                "od_sd"))

mcmc_trace(convergence_params, facet_args = list(ncol = 4)) + 
  theme(panel.spacing = unit(0.5, "lines"))
Chain mixing of several key parameters relating to detection, abundance and dispersion

Figure 5: Chain mixing of several key parameters relating to detection, abundance and dispersion

Model Predictions

Within the Stan model we generate predictions for sampled and unsampled locations. This provides site-level abundance estimates as well as estimates across all (unsampled) public forest.

Site-based Predictions

Site-based Prediction Map

Visualisation of point estimates for the deer species surveyed in this study.

Show code
vic_regions <- vicmap_query("open-data-platform:delwp_region") %>%
  collect() %>%
  st_transform(3111) %>%
  st_simplify(dTolerance = 500)


site_preds <- function(model, cams_curated, species_index) {
  
rn_dens <- model$summary("N_site", prob_occ = ~ mean(. > 0), trimmed_mean = ~mean(., trim = 0.025), posterior::default_summary_measures(), posterior::default_convergence_measures())
  
which_sp <- which(stringr::str_detect(string = rn_dens$variable,
                                         pattern = paste0("N_site\\[",
                                                          species_index)))
  

  density_at_sites_rn <- cbind(cams_curated, rn_dens[which_sp, ])
  
  return(density_at_sites_rn)
}

site_density_plot <- function(densities, regions, species) {
  densities$density <- cut(densities$trimmed_mean,
                                   breaks = c(0, 0.1, 1, 3, 5, 10, 15, max(densities$trimmed_mean)),
                                   labels = c("<0.1", "0.1 - 1", "1 - 3", "3 - 5", "5 - 10", "10 - 15", "15+"), include.lowest = T, right = T)
  
  if(species == "Hog") {
    regions <- regions %>% filter(delwp_region == "GIPPSLAND")
    densities <- densities %>% st_filter(regions %>% st_transform(4283))
  }
  
plot <- ggplot2::ggplot(data = densities) +
  ggplot2::geom_sf(data = regions, alpha = 0.75, fill = "grey80") +
  ggplot2::geom_sf(aes(fill = density, alpha = mean), shape = 21, size = 3) +
  scale_fill_viridis_d(name = "", guide = guide_legend(override.aes = list(size = 6))) +
  scale_alpha_continuous(range = c(0.5,1), guide = "none") +
  labs(title = paste0('Density of ',species ,' Deer'), fill = bquote('Deer per'~km^2)) +
  theme_bw() +
  theme(legend.text = element_text(size = 18), legend.key.size = unit(1, "cm"),
        title = element_text(size = 22))
  
return(plot)
  
}

sambar_preds <- site_preds(model_fits[[top]], cams_curated = cams_curated, species_index = 1)
fallow_preds <- site_preds(model_fits[[top]], cams_curated = cams_curated, species_index = 2)
red_preds <- site_preds(model_fits[[top]], cams_curated = cams_curated, species_index = 3)
hog_preds <- site_preds(model_fits[[top]], cams_curated = cams_curated, species_index = 4)

cowplot::plot_grid(site_density_plot(sambar_preds, vic_regions, species = "Sambar"),
site_density_plot(fallow_preds, vic_regions, species = "Fallow"),
site_density_plot(red_preds, vic_regions, species = "Red"), 
site_density_plot(hog_preds, vic_regions, species = "Hog"), ncol = 1)
Site-level density estimates for all sites sampled as part of statewide and hog deer surveys. Point-density estimates are from trimmed means of the posterior draws (trimming of the top and bottom 1% of draws)

Figure 6: Site-level density estimates for all sites sampled as part of statewide and hog deer surveys. Point-density estimates are from trimmed means of the posterior draws (trimming of the top and bottom 1% of draws)

Site-density Summaries

As a sanity-check we compare the average modelled densities at sites with (i) no evidence of deer, (ii) evidence of deer present on transects, and (iii) evidence of deer present on cameras. We expect that average densities are generally higher at sites that detected some form of deer than sites that did not detect any sign of deer. Additionally, we would also expect average densities to be generally higher at sites that had detections on cameras than those with only detections from transects. The table below shows these expectations to be correct.

Show code
density_summary_table <- function(preds, model_data, species, species_index) {
  
  cam_seen <- as.integer(as.logical(rowSums(model_data$n_obs[,,species_index])))
  
  preds_sum <- preds %>% 
    st_drop_geometry() %>%
    mutate(`Species` = species,
           `CameraDetection` = cam_seen, 
           `AnyDetection` = model_data$any_seen[species_index, ], 
           `Detection` = case_when(CameraDetection == 0 & AnyDetection == 0 ~ "Not seen", 
                                   CameraDetection == 0 & AnyDetection == 1 ~ "Only detected on transects", 
                                   CameraDetection == 1 & AnyDetection == 1 ~ "Seen on cameras")) %>%
    group_by(`Species`, `Detection`) %>%
    summarise(`Number of Sites` = n(),
              `Average Density` = mean(trimmed_mean)) %>%
    ungroup()
  
  return(preds_sum)
}

density_summary <- bind_rows(
  density_summary_table(sambar_preds, multispecies_data, 
                        species = "Sambar", species_index = 1),
  density_summary_table(fallow_preds, multispecies_data, 
                        species = "Fallow", species_index = 2),
  density_summary_table(red_preds, multispecies_data, 
                        species = "Red", species_index = 3),
  density_summary_table(hog_preds, multispecies_data, 
                        species = "Hog", species_index = 4))

density_summary %>%
  kbl(format = "html", digits = 2, caption = "Average (98% trimmed mean) density estimates at the various groups of sites") %>%
  kable_styling("striped") %>%
  collapse_rows(1)
Table 1: Average (98% trimmed mean) density estimates at the various groups of sites
Species Detection Number of Sites Average Density
Sambar Not seen 180 0.64
Only detected on transects 33 1.82
Seen on cameras 104 2.84
Fallow Not seen 253 0.20
Only detected on transects 34 0.24
Seen on cameras 30 0.74
Red Not seen 304 0.02
Only detected on transects 1 0.94
Seen on cameras 12 1.10
Hog Not seen 295 0.17
Seen on cameras 22 2.01

Regional and Statewide Abundance

Within our model we calculate abundance/density of deer for each 1 km2 cell of the 73,280 km2 of public land. Based on these spatial predictions we can estimate abundance at a regional level (six DEECA regions) and across the whole state.
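The regional totals are formed inside the Stan model by accumulating cell-level predictions into their region (`Nhat_reg[s, pred_reg[i]] += pred[s, i]`, with `Nhat` as the statewide sum). The same bookkeeping, with made-up per-cell abundances and region indices, looks like:

```python
def regional_totals(pred, pred_reg, n_reg):
    # Sum cell-level abundance predictions into their region
    totals = [0.0] * n_reg
    for p, r in zip(pred, pred_reg):
        totals[r - 1] += p  # 1-based region index, as in the Stan data
    return totals

pred = [1.5, 0.2, 3.0, 0.8, 2.5]   # hypothetical per-cell abundances
pred_reg = [1, 2, 1, 3, 2]         # hypothetical region membership
print(regional_totals(pred, pred_reg, 3))
```

Because this is applied to every posterior draw, the regional and statewide totals inherit full posterior uncertainty rather than being point-estimate sums.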

Statewide Maps

Using the model predictions ("pred") for all suitable public land we generate a raster at 1 km2 resolution. We save the average spatial estimates under outputs/rasters (tif files) and also provide binned plots (png files) in outputs/plots.

Show code
pred_raster_full <- terra::rast(prediction_raster)

pred_raster <- terra::app(pred_raster_full[[stringr::str_subset(
  stringr::str_remove_all(labels(terms(ab_formula_1)),
                          "scale[(]|[)]|log[(]|sqrt[(]"), 
  pattern = "[*]", negate = T)]], mean)

mean_raster <- list()
mean_raster_discrete <- list()

# gp_preds_draws_all <- model_fits[[top]]$draws("pred", format = "matrix")
gp_preds_draws_all <- model_fits[[top]]$draws("pred", format = "matrix")

for(i in 1:length(deer_species_all)) {

which_sp <- which(stringr::str_detect(string = colnames(gp_preds_draws_all),
                                         pattern = paste0("pred\\[", i)))

gp_preds_draws_sp <- gp_preds_draws_all[,which_sp]

terra::values(pred_raster)[!is.na(terra::values(pred_raster))] <- apply(gp_preds_draws_sp, 
                                                                        2, 
                                                                        mean, 
                                                                        na.rm = T, 
                                                                        trim = 0.025)

mean_raster[[i]] <- pred_raster
mean_raster_discrete[[i]] <- mean_raster[[i]]
max_pred <- max(values(mean_raster[[i]]), na.rm = T)

values(mean_raster_discrete[[i]]) <- cut(values(mean_raster_discrete[[i]]) , 
                                         breaks = c(0, 0.1,1,3,5,10,15,max_pred), 
                                         include.lowest = T, right = T,
                                         labels = c("0 - 0.1", 
                                                    "0.1 - 1", 
                                                    "1 - 3", 
                                                    "3 - 5",
                                                    "5 - 10", 
                                                    "10 - 15",
                                                    "15 +"))
}

# combine mean rasters together
combined_raster <- rast(mean_raster)
names(combined_raster) <- c("Average Sambar Deer Density (per km2)", 
                            "Average Fallow Deer Density (per km2)", 
                            "Average Red Deer Density (per km2)", 
                            "Average Hog Deer Density (per km2)")

writeRaster(combined_raster, "outputs/rasters/combined_deer_average_density.tif", overwrite = T)
Show code
# reg <- VicmapR::vicmap_query("open-data-platform:delwp_region") %>%
#     VicmapR::collect() %>%
#   sf::st_transform(3111) %>%
#   sf::st_simplify(dTolerance = 250) 
state <- VicmapR::vicmap_query("open-data-platform:vmlite_victoria_polygon_su5") %>%
  filter(state != "NSW" & state != "SA" & feature_type_code != "sea") %>%
  VicmapR::collect() %>%
  sf::st_transform(3111)

gippsland <- vic_regions %>% filter(delwp_region == "GIPPSLAND")


plot_abundance <- function(raster, state, species, crop = NULL) {
  
  if(!is.null(crop)) {
    raster <- terra::crop(raster, vect(crop), mask = T)
    state <- crop
  }
  
  plot <- ggplot2::ggplot() +
    ggplot2::geom_sf(data = state, alpha = 0.5, linewidth = 0.5, fill = "grey80") + 
    tidyterra::geom_spatraster(data = raster, na.rm = T) + 
    tidyterra::scale_fill_terrain_d(na.translate = FALSE) + 
    ggplot2::labs(fill = bquote('Deer per'~km^2), title = paste0("Average Density of ", species, " on Victorian Public Land")) + 
    ggspatial::annotation_scale() 
  
  return(plot)
  
} 

sambar_abundance_plot <- plot_abundance(mean_raster_discrete[[1]], 
                                        state = state, 
                                        species = "Sambar Deer")
fallow_abundance_plot <- plot_abundance(mean_raster_discrete[[2]], 
                                        state = state, 
                                        species = "Fallow Deer")
red_abundance_plot <- plot_abundance(mean_raster_discrete[[3]], 
                                     state = state, 
                                     species = "Red Deer")
hog_abundance_plot <- plot_abundance(mean_raster_discrete[[4]], 
                                     state = state, crop = gippsland,
                                     species = "Hog Deer")

ggsave(plot = sambar_abundance_plot, filename = "outputs/plots/sambar_abundance_plot.png", 
       width = 2400, height = 1600, units = "px")
ggsave(plot = fallow_abundance_plot, filename = "outputs/plots/fallow_abundance_plot.png", 
       width = 2400, height = 1600, units = "px")
ggsave(plot = red_abundance_plot, filename = "outputs/plots/red_abundance_plot.png", 
       width = 2400, height = 1600, units = "px")
ggsave(plot = hog_abundance_plot, filename = "outputs/plots/hog_abundance_plot.png", 
       width = 2400, height = 1600, units = "px")

cowplot::plot_grid(sambar_abundance_plot, fallow_abundance_plot, red_abundance_plot, hog_abundance_plot, ncol = 1)
Abundance of Sambar, Fallow, Red and Hog Deer across Victoria, dark-grey area reflects area not included in predictions (i.e. not public land)

Figure 7: Abundance of Sambar, Fallow, Red and Hog Deer across Victoria, dark-grey area reflects area not included in predictions (i.e. not public land)

Regional Abundance Estimates

For each DEECA region we provide species-level estimates of abundance with 90% credible intervals. We also calculate the model-based average density within each region, based on the estimated abundance and the total area of public land in that region.

Show code
reg <- VicmapR::vicmap_query("open-data-platform:delwp_region") %>%
    VicmapR::collect() %>%
    dplyr::group_by(delwp_region) %>%
    dplyr::summarise(geometry = sf::st_combine(geometry)) %>%
    dplyr::ungroup() %>%
    dplyr::mutate(delwp_region_fact = as.integer(factor(delwp_region))) 

abundance_table <- function(model, regions, pred_area, caption, species_index) {
  
  reg <- model$summary("Nhat_reg", prob_occ = ~ mean(. > 0), trimmed_mean = ~mean(., trim = 0.025), posterior::default_summary_measures(), posterior::default_convergence_measures())
  
  which_sp <- which(stringr::str_detect(string = reg$variable,
                                         pattern = paste0("Nhat_reg\\[", species_index)))
  
  regional_abundance <- reg[which_sp,] %>%
    dplyr::bind_rows(model$summary("Nhat", prob_occ = ~ mean(. > 0), trimmed_mean = ~mean(., trim = 0.025), posterior::default_summary_measures(), posterior::default_convergence_measures())[species_index,]) %>% 
    dplyr::mutate(variable = c(regions$delwp_region, "TOTAL"), 
                  `Area km2` = c(pred_area, sum(pred_area)), 
                  `Average Density (per km2)` = round(median/`Area km2`, 2)) %>% 
    dplyr::select(Region = variable, 
                  `Mean (trimmed)` = trimmed_mean, 
                  Median = median, 
                  SD = sd, 
                  MAD = mad,
                  `5 %` = q5, 
                  `95 %` = q95, 
                  `Area km2`,
                  `Average Density (per km2)`)
  
  kableExtra::kbl(regional_abundance, digits = 1, format = "html", caption = caption) %>%
    kableExtra::kable_styling("striped") %>%
    kableExtra::column_spec(1, bold = TRUE) %>%
    kableExtra::row_spec(6, hline_after = T) %>%
    kableExtra::row_spec(7, background = "#c2a5cf", color = "black", bold = T, hline_after = T)
}

abundance_table(model_fits[[top]], regions = reg, pred_area = table(multispecies_data$pred_reg), caption = "Regional estimates of Sambar Deer Density", species_index = 1)
Table 2: Regional estimates of Sambar Deer Density
Region Mean (trimmed) Median SD MAD 5 % 95 % Area km2 Average Density (per km2)
BARWON SOUTH WEST 310.0 267.8 299.0 156.9 91.7 779.2 4739 0.1
GIPPSLAND 60244.5 59574.3 9925.1 9707.8 45899.4 77830.2 24920 2.4
GRAMPIANS 2134.6 2006.8 967.8 655.5 1197.7 3827.7 9515 0.2
HUME 58752.4 58237.9 10059.4 9154.2 44172.6 77265.9 16665 3.5
LODDON MALLEE 2038.7 1751.9 1661.0 768.5 956.0 4754.9 15209 0.1
PORT PHILLIP 5955.7 5798.7 1437.1 1361.4 3988.7 8741.6 2232 2.6
TOTAL 129833.3 128450.0 19106.9 17887.6 100427.8 163795.2 73280 1.8
Show code
abundance_table(model_fits[[top]], regions = reg, pred_area = table(multispecies_data$pred_reg), caption = "Regional estimates of Fallow Deer Density", species_index = 2)
Table 2: Regional estimates of Fallow Deer Density
Region Mean (trimmed) Median SD MAD 5 % 95 % Area km2 Average Density (per km2)
BARWON SOUTH WEST 11017.6 9533.5 10377.0 5022.0 4099.3 29406.1 4739 2.0
GIPPSLAND 12331.3 11814.0 3785.2 3268.2 7755.8 19537.8 24920 0.5
GRAMPIANS 4443.1 4101.9 3820.4 1331.6 2535.2 8128.5 9515 0.4
HUME 17830.1 17344.8 4942.6 4485.7 11485.1 26968.1 16665 1.0
LODDON MALLEE 3221.2 2847.7 2451.5 1113.6 1587.8 7247.1 15209 0.2
PORT PHILLIP 1777.0 1681.9 800.7 587.4 929.0 3107.6 2232 0.8
TOTAL 51573.7 49490.5 15647.9 11464.9 34973.9 78088.1 73280 0.7
Show code
abundance_table(model_fits[[top]], regions = reg, pred_area = table(multispecies_data$pred_reg), caption = "Regional estimates of Red Deer Density", species_index = 3)
Table 2: Regional estimates of Red Deer Density
Region Mean (trimmed) Median SD MAD 5 % 95 % Area km2 Average Density (per km2)
BARWON SOUTH WEST 4250.6 3760.3 3601.0 1672.7 1917.9 9583.9 4739 0.8
GIPPSLAND 911.4 862.2 464.8 364.6 390.9 1719.9 24920 0.0
GRAMPIANS 3458.1 3220.2 1729.9 1245.3 1753.8 6661.6 9515 0.3
HUME 3038.0 2863.4 1414.9 1205.3 1312.9 5652.7 16665 0.2
LODDON MALLEE 307.6 151.2 1055.4 165.4 12.5 1605.6 15209 0.0
PORT PHILLIP 158.6 139.3 110.6 81.4 49.5 369.8 2232 0.1
TOTAL 12464.4 11768.2 4973.4 3436.1 7452.8 21465.7 73280 0.2
Show code
abundance_table(model_fits[[top]], regions = reg, pred_area = table(multispecies_data$pred_reg), caption = "Regional estimates of Hog Deer Density", species_index = 4)
Table 2: Regional estimates of Hog Deer Density
Region Mean (trimmed) Median SD MAD 5 % 95 % Area km2 Average Density (per km2)
BARWON SOUTH WEST 94.0 62.3 227.7 68.5 4.1 358.6 4739 0.0
GIPPSLAND 4445.3 4182.5 1820.9 1327.3 2460.9 7939.7 24920 0.2
GRAMPIANS 66.3 40.4 331.5 43.1 3.1 278.4 9515 0.0
HUME 68.3 45.8 96.8 48.6 3.1 260.9 16665 0.0
LODDON MALLEE 48.6 19.7 344.2 24.7 0.0 277.8 15209 0.0
PORT PHILLIP 872.5 771.1 603.7 401.1 336.3 1932.1 2232 0.3
TOTAL 5727.7 5415.7 2187.1 1614.6 3253.8 10160.1 73280 0.1

Model Covariates

Our model uses covariates to inform three separate submodels:

Detection Processes

At a site, two processes may lead us to not record deer on a camera even though they occupy the site (where the site is the home-range area surrounding the camera). First, deer may be available for detection and enter the sampling area in front of the camera, but we fail to detect them because of their distance from the camera. Second, deer may occupy a site (known from transect signs) but never enter the camera field of view; in this case we estimate separate detection rates for the various survey methods. Under this approach, the detection rate of the camera is treated as the spatial availability of deer for camera-trap sampling.
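As a back-of-the-envelope illustration of how these two processes combine, the probability of recording deer at an occupied site is one minus the probability that the camera and every transect search all miss. This sketch assumes independence between methods; the function name and example values are ours, not the model's:

```r
# Illustrative sketch only: probability of recording deer at an occupied site,
# assuming the camera and k independent transect searches each miss
# independently. Values are made up for illustration.
p_record <- function(p_camera, p_transect, k_transects = 3) {
  1 - (1 - p_camera) * (1 - p_transect)^k_transects
}

p_record(p_camera = 0.4, p_transect = 0.25)  # 1 - 0.6 * 0.75^3 = 0.746875
```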

Distance-sampling

Below we show the average detection rate for a given group of deer in front of the camera (up to 12.5 m).
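The group-size availability adjustment described in the methods can be sketched as follows. If the n individuals in a group are placed independently and uniformly over the area out to the truncation distance R, then one individual is closer than r with probability (r/R)², so the closest of n falls in the distance bin [a, b] with probability (1 − (a/R)²)ⁿ − (1 − (b/R)²)ⁿ. This mirrors the `pr_closest()` helper called in the code below, whose exact implementation may differ:

```r
# Sketch of a closest-individual availability calculation for a point camera.
# Assumes n individuals placed independently and uniformly within max_distance R:
# P(one individual closer than r) = (r / R)^2, so
# P(closest of n lands in [a, b]) = P(all beyond a) - P(all beyond b).
pr_closest_sketch <- function(a, b, max_distance, n) {
  (1 - (a / max_distance)^2)^n - (1 - (b / max_distance)^2)^n
}

pr_closest_sketch(0, 2.5, max_distance = 12.5, n = 1)  # single animal: 0.04
pr_closest_sketch(0, 2.5, max_distance = 12.5, n = 5)  # closest of five: ~0.18
```

As the example shows, the closest member of a larger group is much more likely to land in a near bin, which is why ignoring group size would bias the estimated detection function.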

Show code
det_rates <- model_fits[[top]]$summary("p") %>%
  mutate(var = stringr::str_extract(variable, "(?<=\\[).+?(?=\\])")) %>%
  separate(var, into = c("site", "Group Size"), sep = ",")

av_det_rates <- det_rates %>%
  group_by(`Group Size`) %>%
  summarise(mean = mean(mean)) %>%
  ungroup() %>% 
  transmute(`Group Size`, 
            `Average Site Detection Probability` = mean)


av_det_rates %>%
  bind_rows() %>%
  arrange(`Group Size`) %>%
  kbl("html", digits = 3, caption = "Average detection rates for different group sizes") %>%
  kable_styling("striped") %>%
  collapse_rows(1)
Table 3: Average detection rates for different group sizes
Group Size Average Site Detection Probability
1 0.421
2 0.553
3 0.634
4 0.690
5 0.730

We can generate detection function plots that show how detection rates fall off with distance. Initial exploration suggested a possible relationship between herbaceous understorey and detection probability for Sambar Deer. However, when species are pooled, herbaceous understorey appears to have no effect on detection.
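For orientation, detection curves of this kind are commonly modelled in distance sampling with a half-normal key function, g(r) = exp(−r² / (2σ²)). A minimal sketch (the σ value is arbitrary, and this is not necessarily the key function used by the fitted model):

```r
# Half-normal detection function sketch; sigma is an arbitrary example value.
g_halfnormal <- function(r, sigma) exp(-r^2 / (2 * sigma^2))

r <- seq(0, 12.5, by = 0.5)  # metres, matching the 12.5 m truncation distance
g <- g_halfnormal(r, sigma = 5)
round(g[c(1, 13, 26)], 2)    # detection probability at 0 m, 6 m, 12.5 m
# 1.00 0.49 0.04
```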

Show code
site_vars <- dplyr::tbl(con, dbplyr::in_schema("deervic", "curated_site_data")) %>%
    dplyr::filter(SiteID %in% !!c(cams_curated$SiteID, "47191")) %>%
    dplyr::collect() %>%
    dplyr::mutate(HerbaceousUnderstoryCover = NNWHUCover + ENWHUCover,
           SiteID = dplyr::case_when(SiteID == "47191" & CameraID == "HO04101053" ~ "47191A",
                              SiteID == "47191" & CameraID != "HO04101053" ~ "47191B",
                              TRUE ~ SiteID)) %>% # native + exotic herbaceous cover
    dplyr::arrange(SiteID) %>%
    as.data.frame()

  n_distance_bins <- 5
  delta <- 2.5
  midpts <- c(1.25, 3.75, 6.25, 8.75, 11.25)
  max_distance <- 12.5
  gs <- 1:5
  
  pa <- matrix(data = NA, nrow = max(gs), ncol = length(midpts))
  for(j in 1:max(gs)) {
    pa[j, ] <- sapply(midpts, function(x) {pr_closest(x-(0.5*delta),
                                                      x+(0.5*delta),
                                                      max_distance = max_distance,
                                                      n = j)})
  }


det_curve <- model_fits[[top]]$draws("DetCurve", format = "draws_matrix") %>%
  as.data.frame() %>%
  # head(250) %>%
  pivot_longer(cols = everything())

det_curve_wr <- det_curve %>%
  mutate(var = stringr::str_extract(name, "(?<=\\[).+?(?=\\])")) %>%
  separate(var, into = c("s", "Distance"), sep = ",") %>%
  mutate(Distance = as.numeric(Distance)-0.5)

det_vars_pred <- site_vars %>%
  mutate(s = as.character(1:nrow(.)),
         herbaceouslvl = cut(HerbaceousUnderstoryCover,
                             breaks = c(0, 25, 50, 75, 105), 
                             labels = c("0 - 25%", 
                                        "25 - 50%", 
                                        "50 - 75%", 
                                        "75 - 100%"),
                             include.lowest = T, right = FALSE))

det_curve_sum <- det_curve_wr %>%
  mutate(Distance = as.numeric(Distance)) %>%
  left_join(det_vars_pred) %>%
  group_by(herbaceouslvl, Distance) %>%
  summarise(median = quantile(value, 0.5),
            q5 = quantile(value, 0.05),
            q95 = quantile(value, 0.95))

scaleHN <- det_curve_sum %>% filter(Distance == 1.5) %>% pull(median) %>% mean() # Approx scaling

y_comb_list <- list()
for(i in 1:max(gs)) {
y_comb_list[[i]] <- colSums(multispecies_data$y[,,i]) %>% # counts for group size i
  as.data.frame() %>%
  `colnames<-`("Count") %>%
  mutate(Distance = midpts,
         Bin = 1:5,
         gs = i,
         Prop = Count/(max(Count)),
         CountS = Count/pa[i, Bin], # pa is indexed [group size, distance bin]
         PropS = CountS/max(CountS))

}

y_combined <- bind_rows(y_comb_list) %>% 
  group_by(Distance) %>%
  mutate(SumCount = sum(Count)) %>%
  ungroup() %>%
  mutate(propC = Count/SumCount, 
         PropSM = PropS * propC)  %>%
  group_by(Distance) %>% 
  summarise(PropS = sum(PropSM))

detection_plot_HN <- ggplot(aes(x = Distance), data = det_curve_sum) +
  geom_col(aes(y = PropS), fill = "grey70", colour = "grey20", width = 2.5, data = y_combined, alpha = 0.7) +
  # geom_errorbar(aes(ymin = PropSq5, ymax = PropSq95),  data = y_combined) +
  # geom_line(aes(y = HNS)) +
  geom_ribbon(aes(ymin = q5, ymax = q95, fill = herbaceouslvl), alpha = 0.5) +
  geom_line(aes(y = median, colour = herbaceouslvl)) +
  scale_x_continuous(expand = c(0, 0)) +
  scale_y_continuous(expand = c(0, 0), limits = c(0,1)) +
  scale_fill_brewer(palette = "Set1") +
  scale_colour_brewer(palette = "Set1") +
  ylab("Detection probability (p)") +
  labs(fill = "Herbaceous\nUnderstorey\nCover", colour = "Herbaceous\nUnderstorey\nCover") +
  theme_classic()

detection_plot_HN

Transect-surveys

For Sambar, Fallow and Red Deer we were able to estimate detection rates of the various methods used to detect deer (camera, footprints, pellets, rubbings and wallows).

Show code
all_draws <- model_fits[[top]]$draws("beta_trans_det", format = "df")

# det_marginal_effects <- list()
# det_plot <- list()

obs_vars <- unlist(stringr::str_remove_all(string = labels(terms(transect_formula)), pattern =  "log|scale|\\)|\\("))
obs_cols <- unlist(stringr::str_remove_all(string = colnames(multispecies_data$trans_pred_matrix), pattern =  "log|scale|\\)|\\("))
obs_logs <- unlist(stringr::str_detect(string = colnames(multispecies_data$trans_pred_matrix), pattern =  "log\\("))

obs_labs <- c("Survey Method")

fac <- c(TRUE, TRUE, TRUE, TRUE, TRUE)
fac2 <- c(TRUE)

det_obs <- multispecies_data$transects %>% 
  mutate(Survey = factor(Survey))

det_marginal_effects <- list()
det_plot <- list()

params_w_levs <- levels(det_obs$Survey)

for(i in 1:(length(obs_cols))) {
det_marginal_effects[[i]] <- marginal_effects_cmd(all_draws, 
                                              param = "beta_trans_det", 
                                              param_number = i, log = obs_logs[i],
                                     model_data = det_obs, 
                                     model_column = obs_vars[c(1,attr(multispecies_data$trans_pred_matrix, "assign")[-1])[i]], 
                                     transition = FALSE) %>%
  mutate(group = param, 
         param = params_w_levs[i], 
         factor = fac[i], 
         variable = as.numeric(variable))


}

marginal_prob <- function(x, pwr = 3) {
  xm <- 1-x
  return(1-(xm^pwr))
}

det_marginal_effects_bind <- bind_rows(det_marginal_effects) %>%
  rowwise() %>%
  mutate(value = case_when(param != "Camera" ~ marginal_prob(value), 
                           TRUE ~ value))

det_marginal_effects_split <- split(det_marginal_effects_bind, det_marginal_effects_bind$group)

for(i in 1:length(det_marginal_effects_split)) {
  
  if(fac2[i]) {
    plot_data <- det_marginal_effects_split[[i]] %>% 
      mutate(variable = param, 
             param = group)
  } else {
    plot_data <- det_marginal_effects_split[[i]]
  }

det_plot[[i]] <- plot_data

}
data_for_plot <- bind_rows(det_plot) %>% 
  mutate(species = "All (combined)") %>%
  mutate(Density = "1 deer/km^2^")

data_d3 <- data_for_plot %>% 
  mutate(value = 1 - (1 - value)^3, 
         Density = "3 deer/km^2^")

data_d5 <- data_for_plot %>% 
  mutate(value = 1 - (1 - value)^5, 
         Density = "5 deer/km^2^")

data_combined <- bind_rows(data_for_plot, data_d3, data_d5)

marginal_effects_plot_cmd_all(data_combined, 
                              factor = TRUE,
                              ylab = "Detection probability") +
  xlab("Survey method") +
  scale_y_continuous(limits = c(0,1)) +
  scale_fill_brewer(palette = "Set1") +
  # scale_x_discrete(expand = c(0, 0)) +
  theme_classic() +
  theme(legend.position = "top", # set after theme_classic() so it is not reset
        legend.text = ggtext::element_markdown())

Figure 8: Detection Probability for the various methods of survey. Camera trap detection probability is based on average deployment length (53 days), while transects are based on 3 independent transect searches

Abundance Processes

Bioregion

We used a spatially derived random effect for the bioregion of the sampled site. This random effect allows us to make predictions that include the variance associated with bioregion, and it shrinks predictions towards zero in areas without deer detections (e.g. the Mallee).
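In miniature, the random effect enters the abundance model on the log scale, so expected density is multiplied by exp(ε). The baseline and effect values below are invented for illustration only:

```r
# Illustrative only: how a bioregion random effect (log scale) modifies
# expected deer density. Baseline and eps values are made up.
baseline_density <- 1.5                              # deer per km^2 (assumed)
eps_bioregion <- c(Highlands = 0.8, Mallee = -2.0)   # assumed posterior means

round(baseline_density * exp(eps_bioregion), 2)
# a strongly negative effect (as for the Mallee) pulls predictions toward zero
```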

Show code
bioregion_contribution <- function(model, data, species, species_index) {
  eps_bioregion <- model$draws("eps_bioregion", format = "matrix")
  
  which_sp <- which(stringr::str_detect(string = colnames(eps_bioregion),
                                         pattern = paste0("eps_bioregion\\[", species_index)))
  
  eps_bioregion <- eps_bioregion[,which_sp]
  
  bioregion_data <- data[["bioreg_sf"]]
  colnames(eps_bioregion) <- bioregion_data$bioregion
  
  plot <- mcmc_areas(eps_bioregion, prob_outer = 0.9) +
    ggtitle(species)
  
  return(plot)
}

bio_plot_data <- list()

for(i in 1:length(species_names)) {
 bio_plot_data[[i]] <- bioregion_contribution(model = model_fits[[top]], 
                         data = multispecies_data, 
                         species = species_names[i], species_index = i)
} 

cowplot::plot_grid(plotlist = bio_plot_data, ncol = 1)

Figure 9: Influence of the bioregion random effect on abundance (log-scale).

Fixed-parameter Effects

Show code
ab_joined_list <- list()
for(j in 1:length(species_names)) {
beta_draws <- model_fits[[top]]$draws("beta_psi", format = "df")

# which_sp <- which(stringr::str_detect(string = colnames(beta_draws),
#                                     pattern = paste0("beta_psi\\[", i)))
# 
# beta_draws <- beta_draws[,which_sp]

ab_marginal_effects <- list()
ab_plot <- list()

ab_to_use <- ab_formula_3

phi_vars <- unlist(stringr::str_remove_all(string = labels(terms(ab_to_use)), pattern =  "log|scale|sqrt|\\)|\\("))
phi_logs <- unlist(stringr::str_detect(string = labels(terms(ab_to_use)), pattern =  "log\\("))
phi_sqrts <- c(0.5,0.5,1,1,0.5)

phi_labs <- c("Bare Soil (%)", 
              "Nitrogen (%)", 
              "Distance to Pastoral Land (m)", 
              "Precipitation Seasonality", 
              'Length of forest edge in 1km2 (m)')

fac <- c(FALSE, FALSE, FALSE, FALSE, FALSE)

for(i in 1:length(phi_vars)) {
ab_marginal_effects[[i]] <- marginal_effects_cmd(beta_draws, 
                                              param = "beta_psi", species_index = j,
                                              param_number = i+1, log = phi_logs[i],
                                     model_data = multispecies_data[["raw_data"]], 
                                     abundance = TRUE,
                                     pwr = phi_sqrts[i],
                                     model_column = phi_vars[i], 
                                     transition = FALSE) %>%
  mutate(species = species_names[j])

}
ab_joined_list[[j]] <- bind_rows(ab_marginal_effects)
}

all_me_data <- bind_rows(ab_joined_list) %>%
  mutate(param = case_when(param == "BareSoil" ~ "Bare Soil (%)", 
                           param == "Nitrogen" ~ "Nitrogen (%)", 
                           param == "PastureDistance" ~ "Distance to Pastoral Land (m)", 
                           param == "BIO15" ~ "Precipitation Seasonality", 
                           param == "ForestEdge" ~ 'Amount of Forest Edge per km2 (m)'))
  

marginal_effects_plot_cmd_all(all_me_data, 
                          col = "DarkGreen", 
                          factor = FALSE,
                          ylab = "Contribution to Abundance (log-scale)") +
      ggplot2::facet_grid(species~param, scales = "free") +
  theme_bw() +
  xlab("Covariate Value")

Figure 10: Marginal response curves for the various fixed-effect parameters used in the model.

Additional Parameters of Interest

Below is a table with several model parameters we estimated:

Show code
key_params <- model_fits[[top]]$summary(c("av_gs", 
                                          "od_mu", 
                                          "od_sd", 
                                          "odRN", 
                                          "activ", 
                                          "Nhat_sum"))

key_params %>% 
  kbl(format = "html", digits = 2, caption = "Several key outputs and parameters of the model not reported earlier") %>%
  kable_styling("striped")
Table 4: Several key outputs and parameters of the model not reported earlier
variable mean median sd mad q5 q95 rhat ess_bulk ess_tail
av_gs[1] 1.01 1.01 0.00 0.00 1.00 1.02 1.00 1270.50 1231.51
av_gs[2] 1.05 1.04 0.04 0.03 1.01 1.12 1.01 670.00 592.44
av_gs[3] 1.09 1.07 0.07 0.05 1.02 1.22 1.00 1582.31 1249.52
av_gs[4] 1.04 1.02 0.06 0.02 1.00 1.13 1.01 773.65 511.27
od_mu[1] -2.19 -2.20 0.30 0.28 -2.65 -1.67 1.00 809.55 973.29
od_mu[2] -3.47 -3.49 0.39 0.36 -4.08 -2.80 1.00 976.24 834.71
od_mu[3] -1.95 -2.00 0.79 0.80 -3.15 -0.57 1.00 1143.20 1254.46
od_mu[4] -1.85 -2.00 0.85 0.81 -2.96 -0.23 1.01 904.64 1065.69
od_sd[1] 0.82 0.76 0.33 0.29 0.41 1.43 1.00 761.98 1286.51
od_sd[2] 1.02 0.99 0.42 0.40 0.38 1.79 1.01 668.08 742.83
od_sd[3] 1.65 1.60 0.57 0.55 0.78 2.67 1.01 1167.15 948.38
od_sd[4] 0.94 0.85 0.61 0.62 0.10 2.09 1.01 676.11 724.69
odRN 0.92 0.89 0.21 0.19 0.64 1.32 1.00 1995.89 1349.31
activ 0.54 0.54 0.03 0.03 0.50 0.59 1.00 2757.39 1134.55
Nhat_sum 201454.16 198529.00 31021.57 29476.31 157629.95 255965.20 1.00 1282.80 1387.07

Relating abundance to vegetation

From our point density estimates of deer, we can model relationships between deer densities at sites and a range of structural vegetation measures collected during the surveys. For this part of the analysis we implement two different models to investigate the relationship between deer density and vegetation.

First, we construct a multivariate Gaussian model in which the response variables are continuous measures recorded at sites. They are:

These response variables are square-root transformed to bring them closer to a normal scale. Second, we run a model with a multivariate response and a binomial logit link, which allows us to model the relationship between deer densities and the following binomial response variables:

Show code
density <- model_fits[[top]]$draws("N_site", format = "matrix")
site_estimates <- list()
site_density <- list()

for(i in 1:n_site) {
  site_estimates[[i]] <- character()
  for(j in 1:length(deer_species_all)) {
    site_estimates[[i]][j] <- paste0("N_site[", j, ",", i, "]")
  }
  site_density[[i]] <- sum(colMeans(density)[site_estimates[[i]]])
}

site_density_ul <- unlist(site_density)

site_density_df <- cams_curated %>%
  mutate(AllDeerDensity = site_density_ul, 
         Bioregion = factor(multispecies_data$site_bioreg), 
         EVC = factor(multispecies_data$site_evc)) %>% 
  filter(ProjectShortName == "StatewideDeer")

 site_vars <- dplyr::tbl(con, dbplyr::in_schema("deervic", "curated_site_data")) %>%
    dplyr::filter(SiteID %in% !!c(cams_curated$SiteID, "47191")) %>%
    dplyr::collect() %>%
    dplyr::mutate(HerbaceousUnderstoryCover = NNWHUCover + ENWHUCover,
                  Exotics = EWUCover + ENWHUCover,
                  ExoticPresence = as.integer(as.logical(Exotics)),
                  SeedlingsOrSaplings = as.integer(as.logical(Seedlings+Saplings)),
                  SeedlingsPresence = as.integer(as.logical(Seedlings)),
                  SaplingsPresence = as.integer(as.logical(Saplings)),
                  SiteID = dplyr::case_when(SiteID == "47191" & CameraID == "HO04101053" ~ "47191A",
                                            SiteID == "47191" & CameraID != "HO04101053" ~ "47191B",
                                            TRUE ~ SiteID)) %>% # native + exotic herbaceous cover
    dplyr::arrange(SiteID) %>%
    as.data.frame()
 
 site_density_df_joined <- site_density_df %>% 
   left_join(site_vars)

bform1 <- bf(mvbind(sqrt(CanopyCov), 
                    sqrt(BGroundCover), 
                    sqrt(NWUCover), 
                    sqrt(NNWHUCover), 
                    sqrt(TopHeight)) ~ sqrt(AllDeerDensity) + (1|EVC)) + set_rescor(FALSE) 

bform2 <- bf(mvbind(ExoticPresence, SeedlingsPresence, SaplingsPresence) ~ sqrt(AllDeerDensity) + (1|EVC))

fit1 <- brm(bform1, data = site_density_df_joined, chains = 2, cores = 2)
fit2 <- brm(bform2,  data = site_density_df_joined, chains = 2, cores = 2, family = bernoulli(link = "logit"))
Show code
sjPlot::tab_model(fit1, show.re.var = F)
  sqrt(BGroundCover) sqrt(CanopyCov) sqrt(NNWHUCover) sqrt(NWUCover) sqrt(TopHeight)
Predictors Estimates CI (95%) Estimates CI (95%) Estimates CI (95%) Estimates CI (95%) Estimates CI (95%)
Intercept 4.10 3.36 – 4.89 4.50 3.78 – 5.17 4.19 3.41 – 5.02 4.94 4.22 – 5.65 3.23 2.77 – 3.68
sqrtAllDeerDensity -0.86 -1.18 – -0.57 0.29 0.01 – 0.58 1.01 0.65 – 1.36 -0.16 -0.51 – 0.17 0.43 0.23 – 0.63
N EVC 19
Observations 252
Show code
sjPlot::tab_model(fit2, show.re.var = F)
  ExoticPresence SaplingsPresence SeedlingsPresence
Predictors Odds Ratios CI (95%) Odds Ratios CI (95%) Odds Ratios CI (95%)
Intercept 0.26 0.10 – 0.58 2.76 1.11 – 8.50 2.08 0.73 – 7.65
sqrtAllDeerDensity 1.58 1.10 – 2.35 1.06 0.71 – 1.59 1.02 0.67 – 1.49
ICC 0.61
N EVC 19
Observations 252